00:00:00.001 Started by upstream project "autotest-per-patch" build number 132841
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:06.669 The recommended git tool is: git
00:00:06.669 using credential 00000000-0000-0000-0000-000000000002
00:00:06.672 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:06.683 Fetching changes from the remote Git repository
00:00:06.684 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:06.696 Using shallow fetch with depth 1
00:00:06.696 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:06.696 > git --version # timeout=10
00:00:06.720 > git --version # 'git version 2.39.2'
00:00:06.720 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:06.734 Setting http proxy: proxy-dmz.intel.com:911
00:00:06.734 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:12.858 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:12.871 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:12.885 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:12.885 > git config core.sparsecheckout # timeout=10
00:00:12.898 > git read-tree -mu HEAD # timeout=10
00:00:12.917 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:12.942 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:12.942 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:13.057 [Pipeline] Start of Pipeline
00:00:13.068 [Pipeline] library
00:00:13.070 Loading library shm_lib@master
00:00:13.070 Library shm_lib@master is cached. Copying from home.
00:00:13.086 [Pipeline] node
00:00:13.095 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:00:13.097 [Pipeline] {
00:00:13.105 [Pipeline] catchError
00:00:13.107 [Pipeline] {
00:00:13.118 [Pipeline] wrap
00:00:13.126 [Pipeline] {
00:00:13.133 [Pipeline] stage
00:00:13.135 [Pipeline] { (Prologue)
00:00:13.387 [Pipeline] sh
00:00:13.672 + logger -p user.info -t JENKINS-CI
00:00:13.692 [Pipeline] echo
00:00:13.694 Node: WFP8
00:00:13.703 [Pipeline] sh
00:00:14.005 [Pipeline] setCustomBuildProperty
00:00:14.018 [Pipeline] echo
00:00:14.020 Cleanup processes
00:00:14.025 [Pipeline] sh
00:00:14.311 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:00:14.311 2094221 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:00:14.365 [Pipeline] sh
00:00:14.651 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:00:14.651 ++ grep -v 'sudo pgrep'
00:00:14.651 ++ awk '{print $1}'
00:00:14.651 + sudo kill -9
00:00:14.651 + true
00:00:14.669 [Pipeline] cleanWs
00:00:14.683 [WS-CLEANUP] Deleting project workspace...
00:00:14.683 [WS-CLEANUP] Deferred wipeout is used...
00:00:14.689 [WS-CLEANUP] done
00:00:14.693 [Pipeline] setCustomBuildProperty
00:00:14.709 [Pipeline] sh
00:00:14.991 + sudo git config --global --replace-all safe.directory '*'
00:00:15.092 [Pipeline] httpRequest
00:00:15.434 [Pipeline] echo
00:00:15.436 Sorcerer 10.211.164.20 is alive
00:00:15.446 [Pipeline] retry
00:00:15.448 [Pipeline] {
00:00:15.464 [Pipeline] httpRequest
00:00:15.469 HttpMethod: GET
00:00:15.469 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.470 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.473 Response Code: HTTP/1.1 200 OK
00:00:15.473 Success: Status code 200 is in the accepted range: 200,404
00:00:15.474 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.920 [Pipeline] }
00:00:15.936 [Pipeline] // retry
00:00:15.944 [Pipeline] sh
00:00:16.230 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:16.245 [Pipeline] httpRequest
00:00:16.918 [Pipeline] echo
00:00:16.920 Sorcerer 10.211.164.20 is alive
00:00:16.929 [Pipeline] retry
00:00:16.931 [Pipeline] {
00:00:16.947 [Pipeline] httpRequest
00:00:16.952 HttpMethod: GET
00:00:16.952 URL: http://10.211.164.20/packages/spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:00:16.954 Sending request to url: http://10.211.164.20/packages/spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:00:16.956 Response Code: HTTP/1.1 404 Not Found
00:00:16.957 Success: Status code 404 is in the accepted range: 200,404
00:00:16.957 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:00:16.963 [Pipeline] }
00:00:16.980 [Pipeline] // retry
00:00:16.988 [Pipeline] sh
00:00:17.274 + rm -f spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:00:17.285 [Pipeline] retry
00:00:17.287 [Pipeline] {
00:00:17.301 [Pipeline] checkout
00:00:17.307 The recommended git tool is: NONE
00:00:19.140 using credential 00000000-0000-0000-0000-000000000002
00:00:19.142 Wiping out workspace first.
00:00:19.154 Cloning the remote Git repository
00:00:19.157 Honoring refspec on initial clone
00:00:19.179 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk
00:00:19.202 > git init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk # timeout=10
00:00:19.218 Using reference repository: /var/ci_repos/spdk_multi
00:00:19.219 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk
00:00:19.219 > git --version # timeout=10
00:00:19.225 > git --version # 'git version 2.45.2'
00:00:19.225 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:19.231 Setting http proxy: proxy-dmz.intel.com:911
00:00:19.232 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/23/25523/5 +refs/heads/master:refs/remotes/origin/master # timeout=10
00:01:25.050 Avoid second fetch
00:01:25.083 Checking out Revision 66289a6dbe28217365daa40fd92dcf327871c2e8 (FETCH_HEAD)
00:01:25.490 Commit message: "build: use VERSION file for storing version"
00:01:24.534 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10
00:01:24.541 > git config --add remote.origin.fetch refs/changes/23/25523/5 # timeout=10
00:01:24.547 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10
00:01:25.052 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:01:25.073 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:01:25.090 > git config core.sparsecheckout # timeout=10
00:01:25.096 > git checkout -f 66289a6dbe28217365daa40fd92dcf327871c2e8 # timeout=10
00:01:25.492 > git rev-list --no-walk 6263899172182e027030cd18a9502d00497c00eb # timeout=10
00:01:25.527 > git remote # timeout=10
00:01:25.540 > git submodule init # timeout=10
00:01:25.633 > git submodule sync # timeout=10
00:01:25.707 > git config --get remote.origin.url # timeout=10
00:01:25.718 > git submodule init # timeout=10
00:01:25.790 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10
00:01:25.797 > git config --get submodule.dpdk.url # timeout=10
00:01:25.803 > git remote # timeout=10
00:01:25.809 > git config --get remote.origin.url # timeout=10
00:01:25.814 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10
00:01:25.822 > git config --get submodule.intel-ipsec-mb.url # timeout=10
00:01:25.828 > git remote # timeout=10
00:01:25.833 > git config --get remote.origin.url # timeout=10
00:01:25.839 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10
00:01:25.845 > git config --get submodule.isa-l.url # timeout=10
00:01:25.851 > git remote # timeout=10
00:01:25.857 > git config --get remote.origin.url # timeout=10
00:01:25.862 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10
00:01:25.868 > git config --get submodule.ocf.url # timeout=10
00:01:25.874 > git remote # timeout=10
00:01:25.880 > git config --get remote.origin.url # timeout=10
00:01:25.885 > git config -f .gitmodules --get submodule.ocf.path # timeout=10
00:01:25.891 > git config --get submodule.libvfio-user.url # timeout=10
00:01:25.897 > git remote # timeout=10
00:01:25.902 > git config --get remote.origin.url # timeout=10
00:01:25.908 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10
00:01:25.913 > git config --get submodule.xnvme.url # timeout=10
00:01:25.919 > git remote # timeout=10
00:01:25.925 > git config --get remote.origin.url # timeout=10
00:01:25.931 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10
00:01:25.936 > git config --get submodule.isa-l-crypto.url # timeout=10
00:01:25.941 > git remote # timeout=10
00:01:25.947 > git config --get remote.origin.url # timeout=10
00:01:25.952 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10
00:01:25.972 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.972 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.972 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.973 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.973 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.973 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.973 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:01:25.978 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.978 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10
00:01:25.994 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.994 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.994 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10
00:01:25.994 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.994 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.994 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10
00:01:25.995 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.995 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10
00:01:25.995 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10
00:01:25.995 Setting http proxy: proxy-dmz.intel.com:911
00:01:25.995 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10
00:01:25.995 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10
00:01:34.312 [Pipeline] dir
00:01:34.313 Running in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:01:34.315 [Pipeline] {
00:01:34.328 [Pipeline] sh
00:01:34.617 ++ nproc
00:01:34.617 + threads=96
00:01:34.617 + git repack -a -d --threads=96
00:01:41.195 + git submodule foreach git repack -a -d --threads=96
00:01:41.195 Entering 'dpdk'
00:01:49.321 Entering 'intel-ipsec-mb'
00:01:49.889 Entering 'isa-l'
00:01:50.147 Entering 'isa-l-crypto'
00:01:50.407 Entering 'libvfio-user'
00:01:50.665 Entering 'ocf'
00:01:51.234 Entering 'xnvme'
00:01:51.803 + find .git -type f -name alternates -print -delete
00:01:51.803 .git/objects/info/alternates
00:01:51.803 .git/modules/dpdk/objects/info/alternates
00:01:51.803 .git/modules/isa-l/objects/info/alternates
00:01:51.803 .git/modules/libvfio-user/objects/info/alternates
00:01:51.803 .git/modules/isa-l-crypto/objects/info/alternates
00:01:51.803 .git/modules/xnvme/objects/info/alternates
00:01:51.803 .git/modules/intel-ipsec-mb/objects/info/alternates
00:01:51.803 .git/modules/ocf/objects/info/alternates
00:01:51.812 [Pipeline] }
00:01:51.830 [Pipeline] // dir
00:01:51.835 [Pipeline] }
00:01:51.852 [Pipeline] // retry
00:01:51.860 [Pipeline] sh
00:01:52.143 + hash pigz
00:01:52.143 + tar -cf spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz -I pigz spdk
00:01:52.724 [Pipeline] retry
00:01:52.726 [Pipeline] {
00:01:52.740 [Pipeline] httpRequest
00:01:52.747 HttpMethod: PUT
00:01:52.747 URL: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:01:52.759 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:02:02.778 Response Code: HTTP/1.1 200 OK
00:02:02.787 Success: Status code 200 is in the accepted range: 200
00:02:02.790 [Pipeline] }
00:02:02.813 [Pipeline] // retry
00:02:02.818 [Pipeline] echo
00:02:02.819 
00:02:02.819 Locking
00:02:02.819 Waited 6s for lock
00:02:02.819 File already exists: /storage/packages/spdk_66289a6dbe28217365daa40fd92dcf327871c2e8.tar.gz
00:02:02.819 
00:02:02.824 [Pipeline] sh
00:02:03.106 + git -C spdk log --oneline -n5
00:02:03.106 66289a6db build: use VERSION file for storing version
00:02:03.106 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:02:03.106 cec5ba284 nvme/rdma: Register UMR per IO request
00:02:03.106 7219bd1a7 thread: use extended version of fd group add
00:02:03.106 1a5bdab32 event: use extended version of fd group add
00:02:03.117 [Pipeline] }
00:02:03.131 [Pipeline] // stage
00:02:03.141 [Pipeline] stage
00:02:03.143 [Pipeline] { (Prepare)
00:02:03.160 [Pipeline] writeFile
00:02:03.176 [Pipeline] sh
00:02:03.460 + logger -p user.info -t JENKINS-CI
00:02:03.472 [Pipeline] sh
00:02:03.755 + logger -p user.info -t JENKINS-CI
00:02:03.769 [Pipeline] sh
00:02:04.055 + cat autorun-spdk.conf
00:02:04.055 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.055 SPDK_TEST_NVMF=1
00:02:04.055 SPDK_TEST_NVME_CLI=1
00:02:04.055 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:04.055 SPDK_TEST_NVMF_NICS=e810
00:02:04.055 SPDK_TEST_VFIOUSER=1
00:02:04.055 SPDK_RUN_UBSAN=1
00:02:04.055 NET_TYPE=phy
00:02:04.063 RUN_NIGHTLY=0
00:02:04.068 [Pipeline] readFile
00:02:04.092 [Pipeline] withEnv
00:02:04.094 [Pipeline] {
00:02:04.108 [Pipeline] sh
00:02:04.393 + set -ex
00:02:04.393 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf ]]
00:02:04.393 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:04.393 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:04.393 ++ SPDK_TEST_NVMF=1
00:02:04.393 ++ SPDK_TEST_NVME_CLI=1
00:02:04.393 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:04.393 ++ SPDK_TEST_NVMF_NICS=e810
00:02:04.393 ++ SPDK_TEST_VFIOUSER=1
00:02:04.393 ++ SPDK_RUN_UBSAN=1
00:02:04.393 ++ NET_TYPE=phy
00:02:04.393 ++ RUN_NIGHTLY=0
00:02:04.393 + case $SPDK_TEST_NVMF_NICS in
00:02:04.393 + DRIVERS=ice
00:02:04.393 + [[ tcp == \r\d\m\a ]]
00:02:04.393 + [[ -n ice ]]
00:02:04.393 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:04.393 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:04.393 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:04.393 rmmod: ERROR: Module irdma is not currently loaded
00:02:04.393 rmmod: ERROR: Module i40iw is not currently loaded
00:02:04.393 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:04.393 + true
00:02:04.393 + for D in $DRIVERS
00:02:04.393 + sudo modprobe ice
00:02:04.393 + exit 0
00:02:04.402 [Pipeline] }
00:02:04.417 [Pipeline] // withEnv
00:02:04.423 [Pipeline] }
00:02:04.437 [Pipeline] // stage
00:02:04.447 [Pipeline] catchError
00:02:04.448 [Pipeline] {
00:02:04.463 [Pipeline] timeout
00:02:04.463 Timeout set to expire in 1 hr 0 min
00:02:04.465 [Pipeline] {
00:02:04.481 [Pipeline] stage
00:02:04.483 [Pipeline] { (Tests)
00:02:04.497 [Pipeline] sh
00:02:04.782 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:04.782 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:04.782 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:04.782 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest_2 ]]
00:02:04.782 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:02:04.782 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output
00:02:04.782 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk ]]
00:02:04.782 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]]
00:02:04.782 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output
00:02:04.782 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output ]]
00:02:04.783 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:04.783 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:02:04.783 + source /etc/os-release
00:02:04.783 ++ NAME='Fedora Linux'
00:02:04.783 ++ VERSION='39 (Cloud Edition)'
00:02:04.783 ++ ID=fedora
00:02:04.783 ++ VERSION_ID=39
00:02:04.783 ++ VERSION_CODENAME=
00:02:04.783 ++ PLATFORM_ID=platform:f39
00:02:04.783 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:04.783 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:04.783 ++ LOGO=fedora-logo-icon
00:02:04.783 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:04.783 ++ HOME_URL=https://fedoraproject.org/
00:02:04.783 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:04.783 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:04.783 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:04.783 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:04.783 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:04.783 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:04.783 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:04.783 ++ SUPPORT_END=2024-11-12
00:02:04.783 ++ VARIANT='Cloud Edition'
00:02:04.783 ++ VARIANT_ID=cloud
00:02:04.783 + uname -a
00:02:04.783 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:04.783 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status
00:02:07.321 Hugepages
00:02:07.321 node hugesize free / total
00:02:07.321 node0 1048576kB 0 / 0
00:02:07.321 node0 2048kB 0 / 0
00:02:07.321 node1 1048576kB 0 / 0
00:02:07.321 node1 2048kB 0 / 0
00:02:07.321 
00:02:07.321 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:07.321 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:02:07.321 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:02:07.321 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:02:07.321 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:02:07.321 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:02:07.321 + rm -f /tmp/spdk-ld-path
00:02:07.321 + source autorun-spdk.conf
00:02:07.321 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.321 ++ SPDK_TEST_NVMF=1
00:02:07.321 ++ SPDK_TEST_NVME_CLI=1
00:02:07.321 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:07.321 ++ SPDK_TEST_NVMF_NICS=e810
00:02:07.321 ++ SPDK_TEST_VFIOUSER=1
00:02:07.321 ++ SPDK_RUN_UBSAN=1
00:02:07.321 ++ NET_TYPE=phy
00:02:07.321 ++ RUN_NIGHTLY=0
00:02:07.321 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:07.321 + [[ -n '' ]]
00:02:07.321 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:02:07.321 + for M in /var/spdk/build-*-manifest.txt
00:02:07.321 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:07.321 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/
00:02:07.321 + for M in /var/spdk/build-*-manifest.txt
00:02:07.321 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:07.321 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/
00:02:07.321 + for M in /var/spdk/build-*-manifest.txt
00:02:07.321 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:07.321 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/output/
00:02:07.321 ++ uname
00:02:07.321 + [[ Linux == \L\i\n\u\x ]]
00:02:07.321 + sudo dmesg -T
00:02:07.584 + sudo dmesg --clear
00:02:07.584 + dmesg_pid=2097272
00:02:07.584 + [[ Fedora Linux == FreeBSD ]]
00:02:07.584 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:07.584 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:07.584 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:07.584 + [[ -x /usr/src/fio-static/fio ]]
00:02:07.584 + export FIO_BIN=/usr/src/fio-static/fio
00:02:07.584 + FIO_BIN=/usr/src/fio-static/fio
00:02:07.584 + sudo dmesg -Tw
00:02:07.584 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\_\2\/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:07.584 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:07.584 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:07.584 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:07.584 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:07.584 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:07.584 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:07.584 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:07.584 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:07.584 22:33:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:07.584 22:33:15 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:02:07.584 22:33:15 -- nvmf-tcp-phy-autotest_2/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:02:07.584 22:33:15 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:02:07.584 22:33:15 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf
00:02:07.584 22:33:15 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:02:07.584 22:33:15 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:02:07.584 22:33:15 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:07.584 22:33:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:07.584 22:33:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:07.584 22:33:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:07.584 22:33:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.584 22:33:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.584 22:33:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.584 22:33:15 -- paths/export.sh@5 -- $ export PATH
00:02:07.584 22:33:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:07.584 22:33:15 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output
00:02:07.584 22:33:15 -- common/autobuild_common.sh@493 -- $ date +%s
00:02:07.584 22:33:15 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733866395.XXXXXX
00:02:07.584 22:33:15 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733866395.yuK9Nr
00:02:07.584 22:33:15 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:02:07.584 22:33:15 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:02:07.584 22:33:15 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/'
00:02:07.584 22:33:15 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp'
00:02:07.584 22:33:15 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/xnvme --exclude /tmp --status-bugs'
00:02:07.584 22:33:15 -- common/autobuild_common.sh@509 -- $ get_config_params
00:02:07.584 22:33:15 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:02:07.584 22:33:15 -- common/autotest_common.sh@10 -- $ set +x
00:02:07.584 22:33:15 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:02:07.584 22:33:15 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:02:07.584 22:33:15 -- pm/common@17 -- $ local monitor
00:02:07.584 22:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:07.584 22:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:07.584 22:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:07.584 22:33:15 -- pm/common@21 -- $ date +%s
00:02:07.584 22:33:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:07.584 22:33:15 -- pm/common@21 -- $ date +%s
00:02:07.584 22:33:15 -- pm/common@25 -- $ sleep 1
00:02:07.584 22:33:15 -- pm/common@21 -- $ date +%s
00:02:07.584 22:33:15 -- pm/common@21 -- $ date +%s
00:02:07.584 22:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733866395
00:02:07.584 22:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733866395
00:02:07.584 22:33:15 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733866395
00:02:07.584 22:33:15 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autobuild.sh.1733866395
00:02:07.843 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733866395_collect-cpu-load.pm.log
00:02:07.843 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733866395_collect-vmstat.pm.log
00:02:07.843 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733866395_collect-cpu-temp.pm.log
00:02:07.843 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autobuild.sh.1733866395_collect-bmc-pm.bmc.pm.log
00:02:08.780 22:33:16 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:02:08.780 22:33:16 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:08.780 22:33:16 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:08.780 22:33:16 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:02:08.780 22:33:16 -- spdk/autobuild.sh@16 -- $ date -u
00:02:08.780 Tue Dec 10 09:33:16 PM UTC 2024
00:02:08.780 22:33:16 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:08.780 v25.01-pre-330-g66289a6db
00:02:08.780 22:33:16 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:08.780 22:33:16 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:08.780 22:33:16 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:08.780 22:33:16 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:08.780 22:33:16 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:08.780 22:33:16 -- common/autotest_common.sh@10 -- $ set +x
00:02:08.780 ************************************
00:02:08.780 START TEST ubsan
00:02:08.780 ************************************
00:02:08.780 22:33:16 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:02:08.780 using ubsan
00:02:08.780 
00:02:08.780 real 0m0.000s
00:02:08.780 user 0m0.000s
00:02:08.780 sys 0m0.000s
00:02:08.780 22:33:16 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:02:08.780 22:33:16 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:08.780 ************************************
00:02:08.780 END TEST ubsan
00:02:08.780 ************************************
00:02:08.780 22:33:16 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:08.780 22:33:16 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:08.780 22:33:16 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:08.780 22:33:16 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:08.780 22:33:16 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:08.780 22:33:16 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:08.780 22:33:16 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:08.780 22:33:16 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:08.780 22:33:16 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:02:09.039 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk
00:02:09.039 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build
00:02:09.298 Using 'verbs' RDMA provider
00:02:22.083 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal.log)...done.
00:02:34.297 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.spdk-isal-crypto.log)...done.
00:02:34.297 Creating mk/config.mk...done.
00:02:34.297 Creating mk/cc.flags.mk...done.
00:02:34.297 Type 'make' to build.
00:02:34.297 22:33:41 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:02:34.297 22:33:41 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:34.297 22:33:41 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:34.297 22:33:41 -- common/autotest_common.sh@10 -- $ set +x
00:02:34.297 ************************************
00:02:34.297 START TEST make
00:02:34.297 ************************************
00:02:34.297 22:33:41 make -- common/autotest_common.sh@1129 -- $ make -j96
00:02:36.207 The Meson build system
00:02:36.207 Version: 1.5.0
00:02:36.207 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user
00:02:36.207 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug
00:02:36.207 Build type: native build
00:02:36.207 Project name: libvfio-user
00:02:36.207 Project version: 0.0.1
00:02:36.207 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:36.207 C linker for the host machine: cc ld.bfd 2.40-14
00:02:36.207 Host machine cpu family: x86_64
00:02:36.207 Host machine cpu: x86_64
00:02:36.207 Run-time dependency threads found: YES
00:02:36.207 Library dl found: YES
00:02:36.207 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:36.207 Run-time dependency json-c found: YES 0.17
00:02:36.207 Run-time dependency cmocka found: YES 1.1.7
00:02:36.207 Program pytest-3 found: NO
00:02:36.207 Program flake8 found: NO
00:02:36.207 Program misspell-fixer found: NO
00:02:36.207 Program restructuredtext-lint found: NO
00:02:36.207 Program valgrind found: YES (/usr/bin/valgrind)
00:02:36.207 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:36.207 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:36.207 Compiler for C supports arguments -Wwrite-strings: YES
00:02:36.207 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:36.207 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-lspci.sh)
00:02:36.207 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/libvfio-user/test/test-linkage.sh)
00:02:36.207 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:02:36.207 Build targets in project: 8 00:02:36.207 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:36.207 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:36.207 00:02:36.207 libvfio-user 0.0.1 00:02:36.207 00:02:36.207 User defined options 00:02:36.207 buildtype : debug 00:02:36.207 default_library: shared 00:02:36.207 libdir : /usr/local/lib 00:02:36.207 00:02:36.207 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.465 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug' 00:02:36.465 [1/37] Compiling C object samples/null.p/null.c.o 00:02:36.465 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:36.465 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:36.465 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:36.465 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:36.465 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:36.465 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:36.465 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:36.465 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:36.465 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:36.465 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:36.465 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:36.465 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:36.465 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:36.465 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:36.465 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:36.465 [17/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:36.465 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:36.465 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:36.465 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:36.465 [21/37] Compiling C object samples/server.p/server.c.o 00:02:36.465 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:36.465 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:36.723 [24/37] Compiling C object samples/client.p/client.c.o 00:02:36.723 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:36.723 [26/37] Linking target samples/client 00:02:36.723 [27/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:36.723 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:36.723 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:36.723 [30/37] Linking target test/unit_tests 00:02:36.723 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:36.982 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:36.982 [33/37] Linking target samples/server 00:02:36.982 [34/37] Linking target samples/null 00:02:36.982 [35/37] Linking target samples/gpio-pci-idio-16 00:02:36.982 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:36.982 [37/37] Linking target samples/lspci 00:02:36.982 INFO: autodetecting backend as ninja 00:02:36.982 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 00:02:36.982 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug 00:02:37.241 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/build-debug' 00:02:37.241 ninja: no work to do. 
00:02:42.516 The Meson build system 00:02:42.516 Version: 1.5.0 00:02:42.516 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk 00:02:42.516 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp 00:02:42.516 Build type: native build 00:02:42.516 Program cat found: YES (/usr/bin/cat) 00:02:42.516 Project name: DPDK 00:02:42.516 Project version: 24.03.0 00:02:42.516 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:42.516 C linker for the host machine: cc ld.bfd 2.40-14 00:02:42.516 Host machine cpu family: x86_64 00:02:42.516 Host machine cpu: x86_64 00:02:42.516 Message: ## Building in Developer Mode ## 00:02:42.516 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:42.516 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/check-symbols.sh) 00:02:42.516 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:42.516 Program python3 found: YES (/usr/bin/python3) 00:02:42.516 Program cat found: YES (/usr/bin/cat) 00:02:42.516 Compiler for C supports arguments -march=native: YES 00:02:42.516 Checking for size of "void *" : 8 00:02:42.516 Checking for size of "void *" : 8 (cached) 00:02:42.516 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:42.516 Library m found: YES 00:02:42.516 Library numa found: YES 00:02:42.516 Has header "numaif.h" : YES 00:02:42.516 Library fdt found: NO 00:02:42.516 Library execinfo found: NO 00:02:42.516 Has header "execinfo.h" : YES 00:02:42.516 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:42.516 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:42.516 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:42.516 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:42.516 Run-time dependency openssl found: YES 3.1.1 00:02:42.516 
Run-time dependency libpcap found: YES 1.10.4 00:02:42.516 Has header "pcap.h" with dependency libpcap: YES 00:02:42.516 Compiler for C supports arguments -Wcast-qual: YES 00:02:42.516 Compiler for C supports arguments -Wdeprecated: YES 00:02:42.516 Compiler for C supports arguments -Wformat: YES 00:02:42.516 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:42.516 Compiler for C supports arguments -Wformat-security: NO 00:02:42.516 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:42.516 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:42.516 Compiler for C supports arguments -Wnested-externs: YES 00:02:42.516 Compiler for C supports arguments -Wold-style-definition: YES 00:02:42.516 Compiler for C supports arguments -Wpointer-arith: YES 00:02:42.516 Compiler for C supports arguments -Wsign-compare: YES 00:02:42.516 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:42.516 Compiler for C supports arguments -Wundef: YES 00:02:42.516 Compiler for C supports arguments -Wwrite-strings: YES 00:02:42.516 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:42.516 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:42.516 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:42.516 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:42.516 Program objdump found: YES (/usr/bin/objdump) 00:02:42.516 Compiler for C supports arguments -mavx512f: YES 00:02:42.516 Checking if "AVX512 checking" compiles: YES 00:02:42.516 Fetching value of define "__SSE4_2__" : 1 00:02:42.516 Fetching value of define "__AES__" : 1 00:02:42.516 Fetching value of define "__AVX__" : 1 00:02:42.516 Fetching value of define "__AVX2__" : 1 00:02:42.516 Fetching value of define "__AVX512BW__" : 1 00:02:42.516 Fetching value of define "__AVX512CD__" : 1 00:02:42.516 Fetching value of define "__AVX512DQ__" : 1 00:02:42.516 Fetching value of define "__AVX512F__" : 1 
00:02:42.516 Fetching value of define "__AVX512VL__" : 1 00:02:42.516 Fetching value of define "__PCLMUL__" : 1 00:02:42.516 Fetching value of define "__RDRND__" : 1 00:02:42.516 Fetching value of define "__RDSEED__" : 1 00:02:42.516 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:42.516 Fetching value of define "__znver1__" : (undefined) 00:02:42.516 Fetching value of define "__znver2__" : (undefined) 00:02:42.516 Fetching value of define "__znver3__" : (undefined) 00:02:42.516 Fetching value of define "__znver4__" : (undefined) 00:02:42.516 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:42.516 Message: lib/log: Defining dependency "log" 00:02:42.516 Message: lib/kvargs: Defining dependency "kvargs" 00:02:42.516 Message: lib/telemetry: Defining dependency "telemetry" 00:02:42.516 Checking for function "getentropy" : NO 00:02:42.516 Message: lib/eal: Defining dependency "eal" 00:02:42.516 Message: lib/ring: Defining dependency "ring" 00:02:42.516 Message: lib/rcu: Defining dependency "rcu" 00:02:42.516 Message: lib/mempool: Defining dependency "mempool" 00:02:42.516 Message: lib/mbuf: Defining dependency "mbuf" 00:02:42.516 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:42.516 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:42.516 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:42.516 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:42.516 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:42.516 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:42.516 Compiler for C supports arguments -mpclmul: YES 00:02:42.516 Compiler for C supports arguments -maes: YES 00:02:42.516 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:42.516 Compiler for C supports arguments -mavx512bw: YES 00:02:42.516 Compiler for C supports arguments -mavx512dq: YES 00:02:42.516 Compiler for C supports arguments -mavx512vl: YES 00:02:42.516 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:42.516 Compiler for C supports arguments -mavx2: YES 00:02:42.516 Compiler for C supports arguments -mavx: YES 00:02:42.516 Message: lib/net: Defining dependency "net" 00:02:42.516 Message: lib/meter: Defining dependency "meter" 00:02:42.516 Message: lib/ethdev: Defining dependency "ethdev" 00:02:42.516 Message: lib/pci: Defining dependency "pci" 00:02:42.516 Message: lib/cmdline: Defining dependency "cmdline" 00:02:42.516 Message: lib/hash: Defining dependency "hash" 00:02:42.516 Message: lib/timer: Defining dependency "timer" 00:02:42.516 Message: lib/compressdev: Defining dependency "compressdev" 00:02:42.516 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:42.516 Message: lib/dmadev: Defining dependency "dmadev" 00:02:42.516 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:42.516 Message: lib/power: Defining dependency "power" 00:02:42.516 Message: lib/reorder: Defining dependency "reorder" 00:02:42.516 Message: lib/security: Defining dependency "security" 00:02:42.516 Has header "linux/userfaultfd.h" : YES 00:02:42.516 Has header "linux/vduse.h" : YES 00:02:42.516 Message: lib/vhost: Defining dependency "vhost" 00:02:42.516 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:42.516 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:42.516 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:42.516 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:42.516 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:42.516 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:42.516 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:42.516 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:42.516 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:42.516 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:42.516 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:42.516 Configuring doxy-api-html.conf using configuration 00:02:42.516 Configuring doxy-api-man.conf using configuration 00:02:42.516 Program mandb found: YES (/usr/bin/mandb) 00:02:42.516 Program sphinx-build found: NO 00:02:42.516 Configuring rte_build_config.h using configuration 00:02:42.516 Message: 00:02:42.516 ================= 00:02:42.516 Applications Enabled 00:02:42.516 ================= 00:02:42.516 00:02:42.516 apps: 00:02:42.516 00:02:42.516 00:02:42.516 Message: 00:02:42.516 ================= 00:02:42.516 Libraries Enabled 00:02:42.516 ================= 00:02:42.516 00:02:42.516 libs: 00:02:42.516 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:42.516 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:42.516 cryptodev, dmadev, power, reorder, security, vhost, 00:02:42.516 00:02:42.516 Message: 00:02:42.516 =============== 00:02:42.516 Drivers Enabled 00:02:42.516 =============== 00:02:42.516 00:02:42.516 common: 00:02:42.516 00:02:42.516 bus: 00:02:42.516 pci, vdev, 00:02:42.516 mempool: 00:02:42.516 ring, 00:02:42.516 dma: 00:02:42.516 00:02:42.516 net: 00:02:42.516 00:02:42.516 crypto: 00:02:42.516 00:02:42.516 compress: 00:02:42.516 00:02:42.516 vdpa: 00:02:42.516 00:02:42.516 00:02:42.516 Message: 00:02:42.516 ================= 00:02:42.516 Content Skipped 00:02:42.516 ================= 00:02:42.516 00:02:42.516 apps: 00:02:42.516 dumpcap: explicitly disabled via build config 00:02:42.516 graph: explicitly disabled via build config 00:02:42.516 pdump: explicitly disabled via build config 00:02:42.516 proc-info: explicitly disabled via build config 00:02:42.516 test-acl: explicitly disabled via build config 00:02:42.516 test-bbdev: explicitly disabled via build config 00:02:42.516 test-cmdline: explicitly disabled via build config 00:02:42.516 test-compress-perf: explicitly disabled via build config 00:02:42.516 test-crypto-perf: explicitly disabled 
via build config 00:02:42.516 test-dma-perf: explicitly disabled via build config 00:02:42.516 test-eventdev: explicitly disabled via build config 00:02:42.516 test-fib: explicitly disabled via build config 00:02:42.516 test-flow-perf: explicitly disabled via build config 00:02:42.516 test-gpudev: explicitly disabled via build config 00:02:42.516 test-mldev: explicitly disabled via build config 00:02:42.517 test-pipeline: explicitly disabled via build config 00:02:42.517 test-pmd: explicitly disabled via build config 00:02:42.517 test-regex: explicitly disabled via build config 00:02:42.517 test-sad: explicitly disabled via build config 00:02:42.517 test-security-perf: explicitly disabled via build config 00:02:42.517 00:02:42.517 libs: 00:02:42.517 argparse: explicitly disabled via build config 00:02:42.517 metrics: explicitly disabled via build config 00:02:42.517 acl: explicitly disabled via build config 00:02:42.517 bbdev: explicitly disabled via build config 00:02:42.517 bitratestats: explicitly disabled via build config 00:02:42.517 bpf: explicitly disabled via build config 00:02:42.517 cfgfile: explicitly disabled via build config 00:02:42.517 distributor: explicitly disabled via build config 00:02:42.517 efd: explicitly disabled via build config 00:02:42.517 eventdev: explicitly disabled via build config 00:02:42.517 dispatcher: explicitly disabled via build config 00:02:42.517 gpudev: explicitly disabled via build config 00:02:42.517 gro: explicitly disabled via build config 00:02:42.517 gso: explicitly disabled via build config 00:02:42.517 ip_frag: explicitly disabled via build config 00:02:42.517 jobstats: explicitly disabled via build config 00:02:42.517 latencystats: explicitly disabled via build config 00:02:42.517 lpm: explicitly disabled via build config 00:02:42.517 member: explicitly disabled via build config 00:02:42.517 pcapng: explicitly disabled via build config 00:02:42.517 rawdev: explicitly disabled via build config 00:02:42.517 regexdev: 
explicitly disabled via build config 00:02:42.517 mldev: explicitly disabled via build config 00:02:42.517 rib: explicitly disabled via build config 00:02:42.517 sched: explicitly disabled via build config 00:02:42.517 stack: explicitly disabled via build config 00:02:42.517 ipsec: explicitly disabled via build config 00:02:42.517 pdcp: explicitly disabled via build config 00:02:42.517 fib: explicitly disabled via build config 00:02:42.517 port: explicitly disabled via build config 00:02:42.517 pdump: explicitly disabled via build config 00:02:42.517 table: explicitly disabled via build config 00:02:42.517 pipeline: explicitly disabled via build config 00:02:42.517 graph: explicitly disabled via build config 00:02:42.517 node: explicitly disabled via build config 00:02:42.517 00:02:42.517 drivers: 00:02:42.517 common/cpt: not in enabled drivers build config 00:02:42.517 common/dpaax: not in enabled drivers build config 00:02:42.517 common/iavf: not in enabled drivers build config 00:02:42.517 common/idpf: not in enabled drivers build config 00:02:42.517 common/ionic: not in enabled drivers build config 00:02:42.517 common/mvep: not in enabled drivers build config 00:02:42.517 common/octeontx: not in enabled drivers build config 00:02:42.517 bus/auxiliary: not in enabled drivers build config 00:02:42.517 bus/cdx: not in enabled drivers build config 00:02:42.517 bus/dpaa: not in enabled drivers build config 00:02:42.517 bus/fslmc: not in enabled drivers build config 00:02:42.517 bus/ifpga: not in enabled drivers build config 00:02:42.517 bus/platform: not in enabled drivers build config 00:02:42.517 bus/uacce: not in enabled drivers build config 00:02:42.517 bus/vmbus: not in enabled drivers build config 00:02:42.517 common/cnxk: not in enabled drivers build config 00:02:42.517 common/mlx5: not in enabled drivers build config 00:02:42.517 common/nfp: not in enabled drivers build config 00:02:42.517 common/nitrox: not in enabled drivers build config 00:02:42.517 
common/qat: not in enabled drivers build config 00:02:42.517 common/sfc_efx: not in enabled drivers build config 00:02:42.517 mempool/bucket: not in enabled drivers build config 00:02:42.517 mempool/cnxk: not in enabled drivers build config 00:02:42.517 mempool/dpaa: not in enabled drivers build config 00:02:42.517 mempool/dpaa2: not in enabled drivers build config 00:02:42.517 mempool/octeontx: not in enabled drivers build config 00:02:42.517 mempool/stack: not in enabled drivers build config 00:02:42.517 dma/cnxk: not in enabled drivers build config 00:02:42.517 dma/dpaa: not in enabled drivers build config 00:02:42.517 dma/dpaa2: not in enabled drivers build config 00:02:42.517 dma/hisilicon: not in enabled drivers build config 00:02:42.517 dma/idxd: not in enabled drivers build config 00:02:42.517 dma/ioat: not in enabled drivers build config 00:02:42.517 dma/skeleton: not in enabled drivers build config 00:02:42.517 net/af_packet: not in enabled drivers build config 00:02:42.517 net/af_xdp: not in enabled drivers build config 00:02:42.517 net/ark: not in enabled drivers build config 00:02:42.517 net/atlantic: not in enabled drivers build config 00:02:42.517 net/avp: not in enabled drivers build config 00:02:42.517 net/axgbe: not in enabled drivers build config 00:02:42.517 net/bnx2x: not in enabled drivers build config 00:02:42.517 net/bnxt: not in enabled drivers build config 00:02:42.517 net/bonding: not in enabled drivers build config 00:02:42.517 net/cnxk: not in enabled drivers build config 00:02:42.517 net/cpfl: not in enabled drivers build config 00:02:42.517 net/cxgbe: not in enabled drivers build config 00:02:42.517 net/dpaa: not in enabled drivers build config 00:02:42.517 net/dpaa2: not in enabled drivers build config 00:02:42.517 net/e1000: not in enabled drivers build config 00:02:42.517 net/ena: not in enabled drivers build config 00:02:42.517 net/enetc: not in enabled drivers build config 00:02:42.517 net/enetfec: not in enabled drivers build 
config 00:02:42.517 net/enic: not in enabled drivers build config 00:02:42.517 net/failsafe: not in enabled drivers build config 00:02:42.517 net/fm10k: not in enabled drivers build config 00:02:42.517 net/gve: not in enabled drivers build config 00:02:42.517 net/hinic: not in enabled drivers build config 00:02:42.517 net/hns3: not in enabled drivers build config 00:02:42.517 net/i40e: not in enabled drivers build config 00:02:42.517 net/iavf: not in enabled drivers build config 00:02:42.517 net/ice: not in enabled drivers build config 00:02:42.517 net/idpf: not in enabled drivers build config 00:02:42.517 net/igc: not in enabled drivers build config 00:02:42.517 net/ionic: not in enabled drivers build config 00:02:42.517 net/ipn3ke: not in enabled drivers build config 00:02:42.517 net/ixgbe: not in enabled drivers build config 00:02:42.517 net/mana: not in enabled drivers build config 00:02:42.517 net/memif: not in enabled drivers build config 00:02:42.517 net/mlx4: not in enabled drivers build config 00:02:42.517 net/mlx5: not in enabled drivers build config 00:02:42.517 net/mvneta: not in enabled drivers build config 00:02:42.517 net/mvpp2: not in enabled drivers build config 00:02:42.517 net/netvsc: not in enabled drivers build config 00:02:42.517 net/nfb: not in enabled drivers build config 00:02:42.517 net/nfp: not in enabled drivers build config 00:02:42.517 net/ngbe: not in enabled drivers build config 00:02:42.517 net/null: not in enabled drivers build config 00:02:42.517 net/octeontx: not in enabled drivers build config 00:02:42.517 net/octeon_ep: not in enabled drivers build config 00:02:42.517 net/pcap: not in enabled drivers build config 00:02:42.517 net/pfe: not in enabled drivers build config 00:02:42.517 net/qede: not in enabled drivers build config 00:02:42.517 net/ring: not in enabled drivers build config 00:02:42.517 net/sfc: not in enabled drivers build config 00:02:42.517 net/softnic: not in enabled drivers build config 00:02:42.517 net/tap: 
not in enabled drivers build config 00:02:42.517 net/thunderx: not in enabled drivers build config 00:02:42.517 net/txgbe: not in enabled drivers build config 00:02:42.517 net/vdev_netvsc: not in enabled drivers build config 00:02:42.517 net/vhost: not in enabled drivers build config 00:02:42.517 net/virtio: not in enabled drivers build config 00:02:42.517 net/vmxnet3: not in enabled drivers build config 00:02:42.517 raw/*: missing internal dependency, "rawdev" 00:02:42.517 crypto/armv8: not in enabled drivers build config 00:02:42.517 crypto/bcmfs: not in enabled drivers build config 00:02:42.517 crypto/caam_jr: not in enabled drivers build config 00:02:42.517 crypto/ccp: not in enabled drivers build config 00:02:42.517 crypto/cnxk: not in enabled drivers build config 00:02:42.517 crypto/dpaa_sec: not in enabled drivers build config 00:02:42.517 crypto/dpaa2_sec: not in enabled drivers build config 00:02:42.517 crypto/ipsec_mb: not in enabled drivers build config 00:02:42.517 crypto/mlx5: not in enabled drivers build config 00:02:42.517 crypto/mvsam: not in enabled drivers build config 00:02:42.517 crypto/nitrox: not in enabled drivers build config 00:02:42.517 crypto/null: not in enabled drivers build config 00:02:42.517 crypto/octeontx: not in enabled drivers build config 00:02:42.517 crypto/openssl: not in enabled drivers build config 00:02:42.517 crypto/scheduler: not in enabled drivers build config 00:02:42.517 crypto/uadk: not in enabled drivers build config 00:02:42.517 crypto/virtio: not in enabled drivers build config 00:02:42.517 compress/isal: not in enabled drivers build config 00:02:42.517 compress/mlx5: not in enabled drivers build config 00:02:42.517 compress/nitrox: not in enabled drivers build config 00:02:42.517 compress/octeontx: not in enabled drivers build config 00:02:42.517 compress/zlib: not in enabled drivers build config 00:02:42.517 regex/*: missing internal dependency, "regexdev" 00:02:42.517 ml/*: missing internal dependency, "mldev" 
00:02:42.517 vdpa/ifc: not in enabled drivers build config 00:02:42.517 vdpa/mlx5: not in enabled drivers build config 00:02:42.517 vdpa/nfp: not in enabled drivers build config 00:02:42.517 vdpa/sfc: not in enabled drivers build config 00:02:42.517 event/*: missing internal dependency, "eventdev" 00:02:42.517 baseband/*: missing internal dependency, "bbdev" 00:02:42.517 gpu/*: missing internal dependency, "gpudev" 00:02:42.517 00:02:42.517 00:02:42.776 Build targets in project: 85 00:02:42.776 00:02:42.776 DPDK 24.03.0 00:02:42.776 00:02:42.776 User defined options 00:02:42.776 buildtype : debug 00:02:42.776 default_library : shared 00:02:42.776 libdir : lib 00:02:42.776 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:02:42.776 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:42.776 c_link_args : 00:02:42.776 cpu_instruction_set: native 00:02:42.776 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:42.776 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:42.776 enable_docs : false 00:02:42.776 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:42.776 enable_kmods : false 00:02:42.776 max_lcores : 128 00:02:42.776 tests : false 00:02:42.776 00:02:42.776 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:43.356 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp' 00:02:43.356 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:43.356 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:43.356 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:43.356 [4/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:43.356 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:43.356 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:43.356 [7/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:43.356 [8/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:43.356 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:43.356 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:43.356 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:43.356 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:43.356 [13/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:43.356 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:43.356 [15/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:43.356 [16/268] Linking static target lib/librte_kvargs.a 00:02:43.616 [17/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:43.616 [18/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:43.616 [19/268] Linking static target lib/librte_log.a 00:02:43.616 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.616 [21/268] Linking static target lib/librte_pci.a 00:02:43.616 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.616 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.616 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.880 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.880 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:43.880 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.880 [28/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:43.880 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:43.880 [30/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:43.880 [31/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.880 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:43.880 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:43.880 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:43.880 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:43.880 [36/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.880 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:43.880 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:43.880 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:43.880 [40/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.880 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:43.880 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:43.880 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:43.880 [44/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.880 [45/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:43.880 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:43.880 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:43.880 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:43.880 [49/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:43.880 [50/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.880 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:43.880 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:43.880 [53/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:43.880 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:43.880 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:43.880 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:43.880 [57/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.880 [58/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:43.880 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:43.880 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:43.880 [61/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:43.880 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:43.880 [63/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:43.880 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:43.880 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:43.880 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:43.880 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:43.880 [68/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:43.880 [69/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:43.880 
[70/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:43.880 [71/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:43.880 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:43.880 [73/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:43.880 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.881 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:43.881 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:43.881 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:43.881 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:43.881 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:43.881 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:43.881 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:43.881 [82/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.881 [83/268] Linking static target lib/librte_meter.a 00:02:43.881 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:43.881 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:43.881 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:43.881 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:43.881 [88/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.138 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.138 [90/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.138 [91/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.138 [92/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.139 [93/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.139 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.139 [95/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.139 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.139 [97/268] Linking static target lib/librte_ring.a 00:02:44.139 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.139 [99/268] Linking static target lib/librte_telemetry.a 00:02:44.139 [100/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.139 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.139 [102/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.139 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.139 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.139 [105/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.139 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.139 [107/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.139 [108/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.139 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.139 [110/268] Linking static target lib/librte_mempool.a 00:02:44.139 [111/268] Linking static target lib/librte_rcu.a 00:02:44.139 [112/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.139 [113/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.139 [114/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.139 [115/268] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.139 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.139 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.139 [118/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.139 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.139 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.139 [121/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.139 [122/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.139 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.139 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.139 [125/268] Linking static target lib/librte_net.a 00:02:44.139 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.139 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.139 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.139 [129/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.139 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.139 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.139 [132/268] Linking static target lib/librte_cmdline.a 00:02:44.139 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.139 [134/268] Linking static target lib/librte_eal.a 00:02:44.139 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:44.139 [136/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.139 [137/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.139 [138/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.396 [139/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.396 [140/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.396 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.396 [142/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.396 [143/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.396 [144/268] Linking target lib/librte_log.so.24.1 00:02:44.396 [145/268] Linking static target lib/librte_mbuf.a 00:02:44.396 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.396 [147/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.396 [148/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.396 [149/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.396 [150/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.396 [151/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.396 [152/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.396 [153/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:44.396 [154/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.396 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.396 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.396 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.396 [158/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.396 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:44.396 [160/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.396 [161/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.396 [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.396 [163/268] Linking static target lib/librte_timer.a 00:02:44.396 [164/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.396 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.396 [166/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.396 [167/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.396 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.396 [169/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.396 [170/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.396 [171/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.396 [172/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.396 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.396 [174/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.396 [175/268] Linking static target lib/librte_compressdev.a 00:02:44.396 [176/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.396 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.396 [178/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.396 [179/268] Linking static target lib/librte_dmadev.a 00:02:44.396 [180/268] Linking target lib/librte_telemetry.so.24.1 00:02:44.396 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.396 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.396 
[183/268] Linking target lib/librte_kvargs.so.24.1 00:02:44.396 [184/268] Linking static target lib/librte_power.a 00:02:44.396 [185/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.396 [186/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.655 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.655 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.655 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:44.655 [190/268] Linking static target lib/librte_reorder.a 00:02:44.655 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:44.655 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:44.655 [193/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:44.655 [194/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.655 [195/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.655 [196/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:44.655 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.655 [198/268] Linking static target drivers/librte_bus_vdev.a 00:02:44.655 [199/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:44.655 [200/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:44.655 [201/268] Linking static target lib/librte_hash.a 00:02:44.655 [202/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.655 [203/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:44.655 [204/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.655 [205/268] Linking static target drivers/librte_bus_pci.a 
00:02:44.655 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:44.655 [207/268] Linking static target lib/librte_security.a 00:02:44.655 [208/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:44.655 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.655 [210/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:44.655 [211/268] Linking static target lib/librte_cryptodev.a 00:02:44.655 [212/268] Linking static target drivers/librte_mempool_ring.a 00:02:44.655 [213/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.915 [214/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.915 [215/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:44.915 [216/268] Linking static target lib/librte_ethdev.a 00:02:44.915 [217/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.915 [218/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.915 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.173 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.173 [221/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.173 [222/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.173 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.432 [224/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.432 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:45.432 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.691 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.708 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.708 [229/268] Linking static target lib/librte_vhost.a 00:02:46.708 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.609 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.804 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.183 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.183 [234/268] Linking target lib/librte_eal.so.24.1 00:02:54.442 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:54.442 [236/268] Linking target lib/librte_timer.so.24.1 00:02:54.442 [237/268] Linking target lib/librte_ring.so.24.1 00:02:54.442 [238/268] Linking target lib/librte_meter.so.24.1 00:02:54.442 [239/268] Linking target lib/librte_pci.so.24.1 00:02:54.442 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:54.442 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:54.442 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:54.442 [243/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:54.442 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:54.442 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:54.442 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:54.442 [247/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:54.701 [248/268] Linking target 
lib/librte_rcu.so.24.1 00:02:54.701 [249/268] Linking target lib/librte_mempool.so.24.1 00:02:54.701 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:54.701 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:54.701 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:54.701 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:54.960 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:54.960 [255/268] Linking target lib/librte_compressdev.so.24.1 00:02:54.960 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:54.960 [257/268] Linking target lib/librte_net.so.24.1 00:02:54.960 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:54.960 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:54.960 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:55.219 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:55.219 [262/268] Linking target lib/librte_security.so.24.1 00:02:55.219 [263/268] Linking target lib/librte_hash.so.24.1 00:02:55.219 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:55.219 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:55.219 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:55.219 [267/268] Linking target lib/librte_power.so.24.1 00:02:55.219 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:55.478 INFO: autodetecting backend as ninja 00:02:55.478 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build-tmp -j 96 00:03:05.462 CC lib/ut_mock/mock.o 00:03:05.462 CC lib/ut/ut.o 00:03:05.462 CC lib/log/log.o 00:03:05.462 CC lib/log/log_flags.o 00:03:05.462 CC lib/log/log_deprecated.o 00:03:05.722 LIB libspdk_ut.a 00:03:05.722 
LIB libspdk_ut_mock.a 00:03:05.722 LIB libspdk_log.a 00:03:05.722 SO libspdk_ut.so.2.0 00:03:05.722 SO libspdk_ut_mock.so.6.0 00:03:05.722 SO libspdk_log.so.7.1 00:03:05.722 SYMLINK libspdk_ut.so 00:03:05.722 SYMLINK libspdk_ut_mock.so 00:03:05.722 SYMLINK libspdk_log.so 00:03:05.982 CC lib/util/base64.o 00:03:05.982 CC lib/util/bit_array.o 00:03:05.982 CC lib/util/cpuset.o 00:03:05.982 CC lib/util/crc16.o 00:03:05.982 CC lib/util/crc32.o 00:03:05.982 CC lib/util/crc32c.o 00:03:05.982 CC lib/util/crc32_ieee.o 00:03:05.982 CC lib/util/crc64.o 00:03:05.982 CC lib/ioat/ioat.o 00:03:05.982 CC lib/util/dif.o 00:03:06.240 CXX lib/trace_parser/trace.o 00:03:06.240 CC lib/util/fd.o 00:03:06.240 CC lib/dma/dma.o 00:03:06.240 CC lib/util/fd_group.o 00:03:06.240 CC lib/util/file.o 00:03:06.240 CC lib/util/hexlify.o 00:03:06.240 CC lib/util/iov.o 00:03:06.240 CC lib/util/math.o 00:03:06.240 CC lib/util/net.o 00:03:06.240 CC lib/util/pipe.o 00:03:06.240 CC lib/util/strerror_tls.o 00:03:06.241 CC lib/util/string.o 00:03:06.241 CC lib/util/uuid.o 00:03:06.241 CC lib/util/xor.o 00:03:06.241 CC lib/util/zipf.o 00:03:06.241 CC lib/util/md5.o 00:03:06.241 CC lib/vfio_user/host/vfio_user.o 00:03:06.241 CC lib/vfio_user/host/vfio_user_pci.o 00:03:06.241 LIB libspdk_dma.a 00:03:06.499 SO libspdk_dma.so.5.0 00:03:06.499 LIB libspdk_ioat.a 00:03:06.499 SO libspdk_ioat.so.7.0 00:03:06.499 SYMLINK libspdk_dma.so 00:03:06.500 SYMLINK libspdk_ioat.so 00:03:06.500 LIB libspdk_vfio_user.a 00:03:06.500 SO libspdk_vfio_user.so.5.0 00:03:06.500 LIB libspdk_util.a 00:03:06.500 SYMLINK libspdk_vfio_user.so 00:03:06.758 SO libspdk_util.so.10.1 00:03:06.758 SYMLINK libspdk_util.so 00:03:06.758 LIB libspdk_trace_parser.a 00:03:06.758 SO libspdk_trace_parser.so.6.0 00:03:07.018 SYMLINK libspdk_trace_parser.so 00:03:07.018 CC lib/env_dpdk/env.o 00:03:07.018 CC lib/env_dpdk/memory.o 00:03:07.018 CC lib/env_dpdk/pci.o 00:03:07.018 CC lib/json/json_parse.o 00:03:07.018 CC lib/rdma_utils/rdma_utils.o 
00:03:07.018 CC lib/vmd/vmd.o 00:03:07.018 CC lib/idxd/idxd.o 00:03:07.018 CC lib/env_dpdk/init.o 00:03:07.018 CC lib/idxd/idxd_user.o 00:03:07.018 CC lib/json/json_util.o 00:03:07.018 CC lib/vmd/led.o 00:03:07.018 CC lib/conf/conf.o 00:03:07.018 CC lib/env_dpdk/threads.o 00:03:07.018 CC lib/idxd/idxd_kernel.o 00:03:07.018 CC lib/json/json_write.o 00:03:07.018 CC lib/env_dpdk/pci_ioat.o 00:03:07.018 CC lib/env_dpdk/pci_virtio.o 00:03:07.018 CC lib/env_dpdk/pci_vmd.o 00:03:07.018 CC lib/env_dpdk/pci_idxd.o 00:03:07.018 CC lib/env_dpdk/pci_event.o 00:03:07.018 CC lib/env_dpdk/sigbus_handler.o 00:03:07.018 CC lib/env_dpdk/pci_dpdk.o 00:03:07.018 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:07.276 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:07.276 LIB libspdk_conf.a 00:03:07.276 SO libspdk_conf.so.6.0 00:03:07.543 LIB libspdk_json.a 00:03:07.543 LIB libspdk_rdma_utils.a 00:03:07.543 SO libspdk_json.so.6.0 00:03:07.543 SO libspdk_rdma_utils.so.1.0 00:03:07.543 SYMLINK libspdk_conf.so 00:03:07.543 SYMLINK libspdk_json.so 00:03:07.543 SYMLINK libspdk_rdma_utils.so 00:03:07.543 LIB libspdk_idxd.a 00:03:07.543 LIB libspdk_vmd.a 00:03:07.802 SO libspdk_idxd.so.12.1 00:03:07.802 SO libspdk_vmd.so.6.0 00:03:07.802 SYMLINK libspdk_idxd.so 00:03:07.802 SYMLINK libspdk_vmd.so 00:03:07.803 CC lib/jsonrpc/jsonrpc_server.o 00:03:07.803 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:07.803 CC lib/jsonrpc/jsonrpc_client.o 00:03:07.803 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:07.803 CC lib/rdma_provider/common.o 00:03:07.803 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.061 LIB libspdk_rdma_provider.a 00:03:08.061 SO libspdk_rdma_provider.so.7.0 00:03:08.061 LIB libspdk_jsonrpc.a 00:03:08.061 SO libspdk_jsonrpc.so.6.0 00:03:08.061 SYMLINK libspdk_rdma_provider.so 00:03:08.061 SYMLINK libspdk_jsonrpc.so 00:03:08.320 LIB libspdk_env_dpdk.a 00:03:08.320 SO libspdk_env_dpdk.so.15.1 00:03:08.320 SYMLINK libspdk_env_dpdk.so 00:03:08.579 CC lib/rpc/rpc.o 00:03:08.579 LIB libspdk_rpc.a 00:03:08.838 SO 
libspdk_rpc.so.6.0 00:03:08.838 SYMLINK libspdk_rpc.so 00:03:09.096 CC lib/trace/trace.o 00:03:09.096 CC lib/trace/trace_flags.o 00:03:09.096 CC lib/trace/trace_rpc.o 00:03:09.096 CC lib/keyring/keyring.o 00:03:09.096 CC lib/keyring/keyring_rpc.o 00:03:09.096 CC lib/notify/notify.o 00:03:09.096 CC lib/notify/notify_rpc.o 00:03:09.356 LIB libspdk_notify.a 00:03:09.356 SO libspdk_notify.so.6.0 00:03:09.356 LIB libspdk_keyring.a 00:03:09.356 LIB libspdk_trace.a 00:03:09.356 SO libspdk_trace.so.11.0 00:03:09.356 SO libspdk_keyring.so.2.0 00:03:09.356 SYMLINK libspdk_notify.so 00:03:09.356 SYMLINK libspdk_trace.so 00:03:09.356 SYMLINK libspdk_keyring.so 00:03:09.949 CC lib/thread/thread.o 00:03:09.949 CC lib/thread/iobuf.o 00:03:09.949 CC lib/sock/sock.o 00:03:09.949 CC lib/sock/sock_rpc.o 00:03:10.207 LIB libspdk_sock.a 00:03:10.207 SO libspdk_sock.so.10.0 00:03:10.207 SYMLINK libspdk_sock.so 00:03:10.467 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:10.467 CC lib/nvme/nvme_ctrlr.o 00:03:10.467 CC lib/nvme/nvme_fabric.o 00:03:10.467 CC lib/nvme/nvme_ns_cmd.o 00:03:10.467 CC lib/nvme/nvme_ns.o 00:03:10.467 CC lib/nvme/nvme_pcie_common.o 00:03:10.467 CC lib/nvme/nvme_pcie.o 00:03:10.467 CC lib/nvme/nvme_qpair.o 00:03:10.467 CC lib/nvme/nvme.o 00:03:10.467 CC lib/nvme/nvme_quirks.o 00:03:10.467 CC lib/nvme/nvme_transport.o 00:03:10.467 CC lib/nvme/nvme_discovery.o 00:03:10.467 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:10.467 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:10.467 CC lib/nvme/nvme_tcp.o 00:03:10.467 CC lib/nvme/nvme_opal.o 00:03:10.467 CC lib/nvme/nvme_io_msg.o 00:03:10.467 CC lib/nvme/nvme_poll_group.o 00:03:10.467 CC lib/nvme/nvme_zns.o 00:03:10.467 CC lib/nvme/nvme_stubs.o 00:03:10.467 CC lib/nvme/nvme_auth.o 00:03:10.467 CC lib/nvme/nvme_cuse.o 00:03:10.467 CC lib/nvme/nvme_vfio_user.o 00:03:10.467 CC lib/nvme/nvme_rdma.o 00:03:11.034 LIB libspdk_thread.a 00:03:11.034 SO libspdk_thread.so.11.0 00:03:11.034 SYMLINK libspdk_thread.so 00:03:11.292 CC lib/fsdev/fsdev.o 
00:03:11.292 CC lib/fsdev/fsdev_io.o 00:03:11.292 CC lib/fsdev/fsdev_rpc.o 00:03:11.292 CC lib/virtio/virtio.o 00:03:11.292 CC lib/accel/accel_rpc.o 00:03:11.292 CC lib/accel/accel.o 00:03:11.292 CC lib/vfu_tgt/tgt_endpoint.o 00:03:11.292 CC lib/vfu_tgt/tgt_rpc.o 00:03:11.292 CC lib/accel/accel_sw.o 00:03:11.292 CC lib/init/json_config.o 00:03:11.292 CC lib/virtio/virtio_vhost_user.o 00:03:11.292 CC lib/blob/blobstore.o 00:03:11.292 CC lib/init/subsystem.o 00:03:11.292 CC lib/blob/request.o 00:03:11.292 CC lib/virtio/virtio_vfio_user.o 00:03:11.292 CC lib/init/subsystem_rpc.o 00:03:11.292 CC lib/virtio/virtio_pci.o 00:03:11.292 CC lib/init/rpc.o 00:03:11.292 CC lib/blob/zeroes.o 00:03:11.292 CC lib/blob/blob_bs_dev.o 00:03:11.550 LIB libspdk_init.a 00:03:11.550 SO libspdk_init.so.6.0 00:03:11.550 LIB libspdk_virtio.a 00:03:11.550 LIB libspdk_vfu_tgt.a 00:03:11.550 SO libspdk_virtio.so.7.0 00:03:11.550 SO libspdk_vfu_tgt.so.3.0 00:03:11.550 SYMLINK libspdk_init.so 00:03:11.809 SYMLINK libspdk_virtio.so 00:03:11.809 SYMLINK libspdk_vfu_tgt.so 00:03:11.809 LIB libspdk_fsdev.a 00:03:11.809 SO libspdk_fsdev.so.2.0 00:03:11.809 SYMLINK libspdk_fsdev.so 00:03:12.067 CC lib/event/app.o 00:03:12.067 CC lib/event/reactor.o 00:03:12.067 CC lib/event/log_rpc.o 00:03:12.067 CC lib/event/app_rpc.o 00:03:12.067 CC lib/event/scheduler_static.o 00:03:12.067 LIB libspdk_accel.a 00:03:12.067 SO libspdk_accel.so.16.0 00:03:12.067 LIB libspdk_nvme.a 00:03:12.326 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:12.326 SYMLINK libspdk_accel.so 00:03:12.326 SO libspdk_nvme.so.15.0 00:03:12.326 LIB libspdk_event.a 00:03:12.326 SO libspdk_event.so.14.0 00:03:12.326 SYMLINK libspdk_event.so 00:03:12.584 SYMLINK libspdk_nvme.so 00:03:12.584 CC lib/bdev/bdev.o 00:03:12.584 CC lib/bdev/bdev_rpc.o 00:03:12.584 CC lib/bdev/bdev_zone.o 00:03:12.584 CC lib/bdev/part.o 00:03:12.584 CC lib/bdev/scsi_nvme.o 00:03:12.843 LIB libspdk_fuse_dispatcher.a 00:03:12.843 SO libspdk_fuse_dispatcher.so.1.0 
00:03:12.843 SYMLINK libspdk_fuse_dispatcher.so 00:03:13.411 LIB libspdk_blob.a 00:03:13.411 SO libspdk_blob.so.12.0 00:03:13.669 SYMLINK libspdk_blob.so 00:03:13.927 CC lib/blobfs/blobfs.o 00:03:13.927 CC lib/blobfs/tree.o 00:03:13.927 CC lib/lvol/lvol.o 00:03:14.494 LIB libspdk_blobfs.a 00:03:14.494 LIB libspdk_bdev.a 00:03:14.494 SO libspdk_blobfs.so.11.0 00:03:14.494 SO libspdk_bdev.so.17.0 00:03:14.754 LIB libspdk_lvol.a 00:03:14.754 SYMLINK libspdk_blobfs.so 00:03:14.754 SO libspdk_lvol.so.11.0 00:03:14.754 SYMLINK libspdk_bdev.so 00:03:14.754 SYMLINK libspdk_lvol.so 00:03:15.014 CC lib/scsi/dev.o 00:03:15.014 CC lib/scsi/lun.o 00:03:15.014 CC lib/scsi/port.o 00:03:15.014 CC lib/scsi/scsi.o 00:03:15.014 CC lib/scsi/scsi_bdev.o 00:03:15.014 CC lib/scsi/scsi_pr.o 00:03:15.014 CC lib/scsi/scsi_rpc.o 00:03:15.014 CC lib/nvmf/ctrlr.o 00:03:15.014 CC lib/ftl/ftl_core.o 00:03:15.014 CC lib/ftl/ftl_init.o 00:03:15.014 CC lib/scsi/task.o 00:03:15.014 CC lib/nvmf/ctrlr_discovery.o 00:03:15.014 CC lib/ftl/ftl_layout.o 00:03:15.014 CC lib/nvmf/ctrlr_bdev.o 00:03:15.014 CC lib/ftl/ftl_debug.o 00:03:15.014 CC lib/ublk/ublk.o 00:03:15.014 CC lib/nvmf/subsystem.o 00:03:15.014 CC lib/ftl/ftl_io.o 00:03:15.014 CC lib/ublk/ublk_rpc.o 00:03:15.014 CC lib/nvmf/nvmf.o 00:03:15.014 CC lib/ftl/ftl_sb.o 00:03:15.014 CC lib/ftl/ftl_l2p.o 00:03:15.014 CC lib/nvmf/nvmf_rpc.o 00:03:15.014 CC lib/nbd/nbd.o 00:03:15.014 CC lib/nvmf/transport.o 00:03:15.014 CC lib/ftl/ftl_l2p_flat.o 00:03:15.014 CC lib/nvmf/tcp.o 00:03:15.014 CC lib/ftl/ftl_nv_cache.o 00:03:15.014 CC lib/nbd/nbd_rpc.o 00:03:15.014 CC lib/nvmf/stubs.o 00:03:15.014 CC lib/ftl/ftl_band.o 00:03:15.014 CC lib/ftl/ftl_band_ops.o 00:03:15.014 CC lib/nvmf/mdns_server.o 00:03:15.014 CC lib/ftl/ftl_writer.o 00:03:15.014 CC lib/nvmf/vfio_user.o 00:03:15.014 CC lib/nvmf/rdma.o 00:03:15.014 CC lib/ftl/ftl_rq.o 00:03:15.014 CC lib/nvmf/auth.o 00:03:15.014 CC lib/ftl/ftl_reloc.o 00:03:15.014 CC lib/ftl/ftl_l2p_cache.o 00:03:15.014 CC 
lib/ftl/mngt/ftl_mngt.o 00:03:15.014 CC lib/ftl/ftl_p2l.o 00:03:15.014 CC lib/ftl/ftl_p2l_log.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:15.014 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:15.014 CC lib/ftl/utils/ftl_md.o 00:03:15.014 CC lib/ftl/utils/ftl_conf.o 00:03:15.014 CC lib/ftl/utils/ftl_mempool.o 00:03:15.014 CC lib/ftl/utils/ftl_bitmap.o 00:03:15.014 CC lib/ftl/utils/ftl_property.o 00:03:15.014 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:15.014 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:15.014 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:15.014 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:15.014 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:15.014 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:15.014 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:15.014 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:15.014 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:15.014 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:15.014 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:15.014 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:15.014 CC lib/ftl/ftl_trace.o 00:03:15.014 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:15.014 CC lib/ftl/base/ftl_base_dev.o 00:03:15.014 CC lib/ftl/base/ftl_base_bdev.o 00:03:15.580 LIB libspdk_nbd.a 00:03:15.580 SO libspdk_nbd.so.7.0 00:03:15.838 LIB libspdk_scsi.a 00:03:15.838 SYMLINK libspdk_nbd.so 00:03:15.838 SO libspdk_scsi.so.9.0 00:03:15.838 LIB libspdk_ublk.a 00:03:15.838 SYMLINK libspdk_scsi.so 00:03:15.838 SO libspdk_ublk.so.3.0 00:03:15.838 SYMLINK libspdk_ublk.so 00:03:16.097 LIB libspdk_ftl.a 00:03:16.097 SO 
libspdk_ftl.so.9.0 00:03:16.097 CC lib/iscsi/conn.o 00:03:16.097 CC lib/iscsi/init_grp.o 00:03:16.097 CC lib/iscsi/iscsi.o 00:03:16.097 CC lib/iscsi/param.o 00:03:16.097 CC lib/iscsi/portal_grp.o 00:03:16.097 CC lib/iscsi/tgt_node.o 00:03:16.097 CC lib/iscsi/iscsi_subsystem.o 00:03:16.097 CC lib/iscsi/iscsi_rpc.o 00:03:16.097 CC lib/iscsi/task.o 00:03:16.097 CC lib/vhost/vhost.o 00:03:16.097 CC lib/vhost/vhost_rpc.o 00:03:16.097 CC lib/vhost/vhost_scsi.o 00:03:16.097 CC lib/vhost/vhost_blk.o 00:03:16.097 CC lib/vhost/rte_vhost_user.o 00:03:16.356 SYMLINK libspdk_ftl.so 00:03:16.924 LIB libspdk_nvmf.a 00:03:16.924 SO libspdk_nvmf.so.20.0 00:03:16.924 LIB libspdk_vhost.a 00:03:16.924 SO libspdk_vhost.so.8.0 00:03:17.183 SYMLINK libspdk_nvmf.so 00:03:17.183 SYMLINK libspdk_vhost.so 00:03:17.183 LIB libspdk_iscsi.a 00:03:17.183 SO libspdk_iscsi.so.8.0 00:03:17.442 SYMLINK libspdk_iscsi.so 00:03:18.011 CC module/vfu_device/vfu_virtio.o 00:03:18.011 CC module/vfu_device/vfu_virtio_blk.o 00:03:18.011 CC module/vfu_device/vfu_virtio_scsi.o 00:03:18.011 CC module/vfu_device/vfu_virtio_rpc.o 00:03:18.011 CC module/vfu_device/vfu_virtio_fs.o 00:03:18.011 CC module/env_dpdk/env_dpdk_rpc.o 00:03:18.011 CC module/blob/bdev/blob_bdev.o 00:03:18.011 LIB libspdk_env_dpdk_rpc.a 00:03:18.011 CC module/scheduler/gscheduler/gscheduler.o 00:03:18.011 CC module/keyring/file/keyring.o 00:03:18.011 CC module/keyring/file/keyring_rpc.o 00:03:18.011 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:18.011 CC module/accel/iaa/accel_iaa.o 00:03:18.011 CC module/fsdev/aio/fsdev_aio.o 00:03:18.011 CC module/accel/iaa/accel_iaa_rpc.o 00:03:18.011 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:18.011 CC module/accel/dsa/accel_dsa.o 00:03:18.011 CC module/fsdev/aio/linux_aio_mgr.o 00:03:18.011 CC module/accel/dsa/accel_dsa_rpc.o 00:03:18.011 CC module/accel/error/accel_error.o 00:03:18.011 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:18.011 CC module/keyring/linux/keyring.o 00:03:18.011 
CC module/accel/error/accel_error_rpc.o 00:03:18.011 CC module/keyring/linux/keyring_rpc.o 00:03:18.011 CC module/accel/ioat/accel_ioat_rpc.o 00:03:18.011 CC module/accel/ioat/accel_ioat.o 00:03:18.011 CC module/sock/posix/posix.o 00:03:18.011 SO libspdk_env_dpdk_rpc.so.6.0 00:03:18.270 SYMLINK libspdk_env_dpdk_rpc.so 00:03:18.270 LIB libspdk_scheduler_dpdk_governor.a 00:03:18.270 LIB libspdk_keyring_linux.a 00:03:18.270 LIB libspdk_scheduler_gscheduler.a 00:03:18.270 LIB libspdk_keyring_file.a 00:03:18.270 SO libspdk_keyring_linux.so.1.0 00:03:18.270 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:18.270 SO libspdk_keyring_file.so.2.0 00:03:18.270 LIB libspdk_scheduler_dynamic.a 00:03:18.270 LIB libspdk_accel_error.a 00:03:18.270 SO libspdk_scheduler_gscheduler.so.4.0 00:03:18.270 LIB libspdk_accel_iaa.a 00:03:18.270 LIB libspdk_accel_ioat.a 00:03:18.270 SO libspdk_accel_error.so.2.0 00:03:18.270 SO libspdk_scheduler_dynamic.so.4.0 00:03:18.270 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:18.270 LIB libspdk_blob_bdev.a 00:03:18.270 SYMLINK libspdk_keyring_linux.so 00:03:18.270 LIB libspdk_accel_dsa.a 00:03:18.270 SO libspdk_accel_ioat.so.6.0 00:03:18.270 SYMLINK libspdk_keyring_file.so 00:03:18.270 SO libspdk_accel_iaa.so.3.0 00:03:18.270 SO libspdk_blob_bdev.so.12.0 00:03:18.270 SYMLINK libspdk_scheduler_gscheduler.so 00:03:18.270 SYMLINK libspdk_scheduler_dynamic.so 00:03:18.270 SO libspdk_accel_dsa.so.5.0 00:03:18.270 SYMLINK libspdk_accel_error.so 00:03:18.529 SYMLINK libspdk_blob_bdev.so 00:03:18.529 SYMLINK libspdk_accel_ioat.so 00:03:18.529 SYMLINK libspdk_accel_iaa.so 00:03:18.529 SYMLINK libspdk_accel_dsa.so 00:03:18.529 LIB libspdk_vfu_device.a 00:03:18.529 SO libspdk_vfu_device.so.3.0 00:03:18.529 SYMLINK libspdk_vfu_device.so 00:03:18.529 LIB libspdk_fsdev_aio.a 00:03:18.788 SO libspdk_fsdev_aio.so.1.0 00:03:18.788 LIB libspdk_sock_posix.a 00:03:18.788 SO libspdk_sock_posix.so.6.0 00:03:18.788 SYMLINK libspdk_fsdev_aio.so 00:03:18.788 SYMLINK 
libspdk_sock_posix.so 00:03:18.788 CC module/bdev/error/vbdev_error.o 00:03:18.788 CC module/bdev/error/vbdev_error_rpc.o 00:03:18.788 CC module/bdev/passthru/vbdev_passthru.o 00:03:18.788 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:18.788 CC module/bdev/delay/vbdev_delay.o 00:03:18.788 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:18.788 CC module/bdev/lvol/vbdev_lvol.o 00:03:18.788 CC module/bdev/malloc/bdev_malloc.o 00:03:18.788 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:18.788 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:18.788 CC module/bdev/gpt/gpt.o 00:03:18.788 CC module/bdev/gpt/vbdev_gpt.o 00:03:18.788 CC module/bdev/raid/bdev_raid.o 00:03:18.788 CC module/bdev/raid/bdev_raid_rpc.o 00:03:18.788 CC module/bdev/raid/bdev_raid_sb.o 00:03:18.788 CC module/bdev/ftl/bdev_ftl.o 00:03:19.047 CC module/bdev/raid/raid0.o 00:03:19.047 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:19.047 CC module/bdev/raid/raid1.o 00:03:19.047 CC module/bdev/raid/concat.o 00:03:19.047 CC module/bdev/split/vbdev_split.o 00:03:19.047 CC module/bdev/iscsi/bdev_iscsi.o 00:03:19.047 CC module/bdev/split/vbdev_split_rpc.o 00:03:19.047 CC module/bdev/nvme/bdev_nvme.o 00:03:19.047 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:19.047 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:19.047 CC module/bdev/nvme/nvme_rpc.o 00:03:19.047 CC module/bdev/nvme/bdev_mdns_client.o 00:03:19.047 CC module/blobfs/bdev/blobfs_bdev.o 00:03:19.047 CC module/bdev/nvme/vbdev_opal.o 00:03:19.047 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:19.047 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:19.047 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:19.047 CC module/bdev/null/bdev_null.o 00:03:19.047 CC module/bdev/null/bdev_null_rpc.o 00:03:19.047 CC module/bdev/aio/bdev_aio.o 00:03:19.047 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:19.047 CC module/bdev/aio/bdev_aio_rpc.o 00:03:19.047 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:19.047 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:19.047 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:03:19.047 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:19.306 LIB libspdk_bdev_error.a 00:03:19.306 LIB libspdk_blobfs_bdev.a 00:03:19.306 LIB libspdk_bdev_split.a 00:03:19.306 LIB libspdk_bdev_passthru.a 00:03:19.306 SO libspdk_bdev_error.so.6.0 00:03:19.306 SO libspdk_blobfs_bdev.so.6.0 00:03:19.306 SO libspdk_bdev_split.so.6.0 00:03:19.307 SO libspdk_bdev_passthru.so.6.0 00:03:19.307 LIB libspdk_bdev_gpt.a 00:03:19.307 LIB libspdk_bdev_null.a 00:03:19.307 SYMLINK libspdk_bdev_error.so 00:03:19.307 SO libspdk_bdev_gpt.so.6.0 00:03:19.307 LIB libspdk_bdev_zone_block.a 00:03:19.307 SO libspdk_bdev_null.so.6.0 00:03:19.307 SYMLINK libspdk_bdev_split.so 00:03:19.307 LIB libspdk_bdev_ftl.a 00:03:19.307 SYMLINK libspdk_blobfs_bdev.so 00:03:19.307 LIB libspdk_bdev_malloc.a 00:03:19.307 LIB libspdk_bdev_aio.a 00:03:19.307 LIB libspdk_bdev_delay.a 00:03:19.307 SYMLINK libspdk_bdev_passthru.so 00:03:19.307 LIB libspdk_bdev_iscsi.a 00:03:19.307 SO libspdk_bdev_zone_block.so.6.0 00:03:19.307 SO libspdk_bdev_ftl.so.6.0 00:03:19.307 SO libspdk_bdev_delay.so.6.0 00:03:19.307 SO libspdk_bdev_malloc.so.6.0 00:03:19.307 SO libspdk_bdev_aio.so.6.0 00:03:19.307 SYMLINK libspdk_bdev_null.so 00:03:19.307 SYMLINK libspdk_bdev_gpt.so 00:03:19.307 SO libspdk_bdev_iscsi.so.6.0 00:03:19.307 SYMLINK libspdk_bdev_ftl.so 00:03:19.307 SYMLINK libspdk_bdev_zone_block.so 00:03:19.307 SYMLINK libspdk_bdev_delay.so 00:03:19.307 SYMLINK libspdk_bdev_malloc.so 00:03:19.307 SYMLINK libspdk_bdev_aio.so 00:03:19.307 LIB libspdk_bdev_virtio.a 00:03:19.307 LIB libspdk_bdev_lvol.a 00:03:19.307 SYMLINK libspdk_bdev_iscsi.so 00:03:19.566 SO libspdk_bdev_virtio.so.6.0 00:03:19.566 SO libspdk_bdev_lvol.so.6.0 00:03:19.566 SYMLINK libspdk_bdev_virtio.so 00:03:19.566 SYMLINK libspdk_bdev_lvol.so 00:03:19.825 LIB libspdk_bdev_raid.a 00:03:19.825 SO libspdk_bdev_raid.so.6.0 00:03:19.825 SYMLINK libspdk_bdev_raid.so 00:03:20.763 LIB libspdk_bdev_nvme.a 00:03:20.763 SO 
libspdk_bdev_nvme.so.7.1 00:03:21.022 SYMLINK libspdk_bdev_nvme.so 00:03:21.591 CC module/event/subsystems/vmd/vmd.o 00:03:21.591 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:21.591 CC module/event/subsystems/iobuf/iobuf.o 00:03:21.591 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:21.591 CC module/event/subsystems/sock/sock.o 00:03:21.591 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:21.591 CC module/event/subsystems/keyring/keyring.o 00:03:21.591 CC module/event/subsystems/fsdev/fsdev.o 00:03:21.591 CC module/event/subsystems/scheduler/scheduler.o 00:03:21.591 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:21.851 LIB libspdk_event_sock.a 00:03:21.851 LIB libspdk_event_keyring.a 00:03:21.851 LIB libspdk_event_vmd.a 00:03:21.851 LIB libspdk_event_vhost_blk.a 00:03:21.851 LIB libspdk_event_vfu_tgt.a 00:03:21.851 LIB libspdk_event_fsdev.a 00:03:21.851 LIB libspdk_event_iobuf.a 00:03:21.851 LIB libspdk_event_scheduler.a 00:03:21.851 SO libspdk_event_sock.so.5.0 00:03:21.851 SO libspdk_event_vhost_blk.so.3.0 00:03:21.851 SO libspdk_event_keyring.so.1.0 00:03:21.851 SO libspdk_event_vmd.so.6.0 00:03:21.851 SO libspdk_event_vfu_tgt.so.3.0 00:03:21.851 SO libspdk_event_scheduler.so.4.0 00:03:21.851 SO libspdk_event_fsdev.so.1.0 00:03:21.851 SO libspdk_event_iobuf.so.3.0 00:03:21.851 SYMLINK libspdk_event_sock.so 00:03:21.851 SYMLINK libspdk_event_vhost_blk.so 00:03:21.851 SYMLINK libspdk_event_keyring.so 00:03:21.851 SYMLINK libspdk_event_scheduler.so 00:03:21.851 SYMLINK libspdk_event_vmd.so 00:03:21.851 SYMLINK libspdk_event_vfu_tgt.so 00:03:21.851 SYMLINK libspdk_event_fsdev.so 00:03:21.851 SYMLINK libspdk_event_iobuf.so 00:03:22.419 CC module/event/subsystems/accel/accel.o 00:03:22.419 LIB libspdk_event_accel.a 00:03:22.419 SO libspdk_event_accel.so.6.0 00:03:22.419 SYMLINK libspdk_event_accel.so 00:03:22.988 CC module/event/subsystems/bdev/bdev.o 00:03:22.988 LIB libspdk_event_bdev.a 00:03:22.988 SO libspdk_event_bdev.so.6.0 00:03:23.248 SYMLINK 
libspdk_event_bdev.so 00:03:23.516 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:23.516 CC module/event/subsystems/ublk/ublk.o 00:03:23.516 CC module/event/subsystems/nbd/nbd.o 00:03:23.516 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:23.516 CC module/event/subsystems/scsi/scsi.o 00:03:23.516 LIB libspdk_event_nbd.a 00:03:23.516 LIB libspdk_event_ublk.a 00:03:23.516 LIB libspdk_event_scsi.a 00:03:23.877 SO libspdk_event_ublk.so.3.0 00:03:23.878 SO libspdk_event_nbd.so.6.0 00:03:23.878 SO libspdk_event_scsi.so.6.0 00:03:23.878 LIB libspdk_event_nvmf.a 00:03:23.878 SYMLINK libspdk_event_nbd.so 00:03:23.878 SYMLINK libspdk_event_ublk.so 00:03:23.878 SO libspdk_event_nvmf.so.6.0 00:03:23.878 SYMLINK libspdk_event_scsi.so 00:03:23.878 SYMLINK libspdk_event_nvmf.so 00:03:24.137 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.137 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.137 LIB libspdk_event_vhost_scsi.a 00:03:24.137 SO libspdk_event_vhost_scsi.so.3.0 00:03:24.137 LIB libspdk_event_iscsi.a 00:03:24.396 SO libspdk_event_iscsi.so.6.0 00:03:24.396 SYMLINK libspdk_event_vhost_scsi.so 00:03:24.396 SYMLINK libspdk_event_iscsi.so 00:03:24.396 SO libspdk.so.6.0 00:03:24.656 SYMLINK libspdk.so 00:03:24.919 CC app/spdk_nvme_discover/discovery_aer.o 00:03:24.920 TEST_HEADER include/spdk/accel.h 00:03:24.920 TEST_HEADER include/spdk/accel_module.h 00:03:24.920 CC app/trace_record/trace_record.o 00:03:24.920 TEST_HEADER include/spdk/assert.h 00:03:24.920 TEST_HEADER include/spdk/barrier.h 00:03:24.920 TEST_HEADER include/spdk/bdev.h 00:03:24.920 TEST_HEADER include/spdk/base64.h 00:03:24.920 TEST_HEADER include/spdk/bdev_module.h 00:03:24.920 CXX app/trace/trace.o 00:03:24.920 TEST_HEADER include/spdk/bit_array.h 00:03:24.920 TEST_HEADER include/spdk/bdev_zone.h 00:03:24.920 TEST_HEADER include/spdk/bit_pool.h 00:03:24.920 CC app/spdk_top/spdk_top.o 00:03:24.920 TEST_HEADER include/spdk/blob_bdev.h 00:03:24.920 TEST_HEADER include/spdk/blobfs_bdev.h 
00:03:24.920 TEST_HEADER include/spdk/blobfs.h 00:03:24.920 TEST_HEADER include/spdk/blob.h 00:03:24.920 CC app/spdk_nvme_perf/perf.o 00:03:24.920 TEST_HEADER include/spdk/config.h 00:03:24.920 TEST_HEADER include/spdk/conf.h 00:03:24.920 TEST_HEADER include/spdk/cpuset.h 00:03:24.920 CC app/spdk_nvme_identify/identify.o 00:03:24.920 CC test/rpc_client/rpc_client_test.o 00:03:24.920 TEST_HEADER include/spdk/crc16.h 00:03:24.920 TEST_HEADER include/spdk/crc32.h 00:03:24.920 TEST_HEADER include/spdk/dif.h 00:03:24.920 TEST_HEADER include/spdk/crc64.h 00:03:24.920 TEST_HEADER include/spdk/dma.h 00:03:24.920 CC app/spdk_lspci/spdk_lspci.o 00:03:24.920 TEST_HEADER include/spdk/endian.h 00:03:24.920 TEST_HEADER include/spdk/env_dpdk.h 00:03:24.920 TEST_HEADER include/spdk/env.h 00:03:24.920 TEST_HEADER include/spdk/event.h 00:03:24.920 TEST_HEADER include/spdk/fd_group.h 00:03:24.920 TEST_HEADER include/spdk/fd.h 00:03:24.920 TEST_HEADER include/spdk/file.h 00:03:24.920 TEST_HEADER include/spdk/fsdev.h 00:03:24.920 TEST_HEADER include/spdk/fsdev_module.h 00:03:24.920 TEST_HEADER include/spdk/ftl.h 00:03:24.920 TEST_HEADER include/spdk/gpt_spec.h 00:03:24.920 TEST_HEADER include/spdk/hexlify.h 00:03:24.920 TEST_HEADER include/spdk/histogram_data.h 00:03:24.920 TEST_HEADER include/spdk/idxd.h 00:03:24.920 TEST_HEADER include/spdk/idxd_spec.h 00:03:24.920 TEST_HEADER include/spdk/ioat.h 00:03:24.920 TEST_HEADER include/spdk/init.h 00:03:24.920 TEST_HEADER include/spdk/iscsi_spec.h 00:03:24.920 TEST_HEADER include/spdk/ioat_spec.h 00:03:24.920 TEST_HEADER include/spdk/json.h 00:03:24.920 TEST_HEADER include/spdk/keyring.h 00:03:24.920 TEST_HEADER include/spdk/keyring_module.h 00:03:24.920 TEST_HEADER include/spdk/jsonrpc.h 00:03:24.920 TEST_HEADER include/spdk/log.h 00:03:24.920 TEST_HEADER include/spdk/md5.h 00:03:24.920 TEST_HEADER include/spdk/likely.h 00:03:24.920 TEST_HEADER include/spdk/memory.h 00:03:24.920 TEST_HEADER include/spdk/lvol.h 00:03:24.920 TEST_HEADER 
include/spdk/mmio.h 00:03:24.920 TEST_HEADER include/spdk/nbd.h 00:03:24.920 TEST_HEADER include/spdk/net.h 00:03:24.920 TEST_HEADER include/spdk/nvme.h 00:03:24.920 TEST_HEADER include/spdk/notify.h 00:03:24.920 TEST_HEADER include/spdk/nvme_intel.h 00:03:24.920 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:24.920 TEST_HEADER include/spdk/nvme_spec.h 00:03:24.920 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:24.920 TEST_HEADER include/spdk/nvme_zns.h 00:03:24.920 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:24.920 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:24.920 TEST_HEADER include/spdk/nvmf_spec.h 00:03:24.920 TEST_HEADER include/spdk/nvmf_transport.h 00:03:24.920 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:24.920 TEST_HEADER include/spdk/nvmf.h 00:03:24.920 TEST_HEADER include/spdk/opal_spec.h 00:03:24.920 TEST_HEADER include/spdk/opal.h 00:03:24.920 TEST_HEADER include/spdk/pci_ids.h 00:03:24.920 TEST_HEADER include/spdk/queue.h 00:03:24.920 TEST_HEADER include/spdk/reduce.h 00:03:24.920 TEST_HEADER include/spdk/rpc.h 00:03:24.920 TEST_HEADER include/spdk/scheduler.h 00:03:24.920 TEST_HEADER include/spdk/pipe.h 00:03:24.920 TEST_HEADER include/spdk/scsi.h 00:03:24.920 TEST_HEADER include/spdk/sock.h 00:03:24.920 TEST_HEADER include/spdk/scsi_spec.h 00:03:24.920 TEST_HEADER include/spdk/stdinc.h 00:03:24.920 TEST_HEADER include/spdk/string.h 00:03:24.920 CC app/spdk_dd/spdk_dd.o 00:03:24.920 TEST_HEADER include/spdk/thread.h 00:03:24.920 TEST_HEADER include/spdk/trace_parser.h 00:03:24.920 TEST_HEADER include/spdk/trace.h 00:03:24.920 CC app/iscsi_tgt/iscsi_tgt.o 00:03:24.920 TEST_HEADER include/spdk/util.h 00:03:24.920 TEST_HEADER include/spdk/ublk.h 00:03:24.920 TEST_HEADER include/spdk/tree.h 00:03:24.920 TEST_HEADER include/spdk/version.h 00:03:24.920 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:24.920 TEST_HEADER include/spdk/uuid.h 00:03:24.920 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:24.920 TEST_HEADER include/spdk/vhost.h 00:03:24.920 
TEST_HEADER include/spdk/vmd.h 00:03:24.920 TEST_HEADER include/spdk/xor.h 00:03:24.920 TEST_HEADER include/spdk/zipf.h 00:03:24.920 CC app/nvmf_tgt/nvmf_main.o 00:03:24.920 CXX test/cpp_headers/accel.o 00:03:24.920 CXX test/cpp_headers/accel_module.o 00:03:24.920 CXX test/cpp_headers/assert.o 00:03:24.920 CXX test/cpp_headers/base64.o 00:03:24.920 CXX test/cpp_headers/barrier.o 00:03:24.920 CXX test/cpp_headers/bdev_module.o 00:03:24.920 CXX test/cpp_headers/bit_array.o 00:03:24.920 CXX test/cpp_headers/bdev_zone.o 00:03:24.920 CXX test/cpp_headers/bit_pool.o 00:03:24.920 CXX test/cpp_headers/bdev.o 00:03:24.920 CXX test/cpp_headers/blobfs_bdev.o 00:03:24.920 CXX test/cpp_headers/blob_bdev.o 00:03:24.920 CXX test/cpp_headers/blobfs.o 00:03:24.920 CXX test/cpp_headers/blob.o 00:03:24.920 CXX test/cpp_headers/conf.o 00:03:24.920 CXX test/cpp_headers/config.o 00:03:24.920 CXX test/cpp_headers/crc16.o 00:03:24.920 CXX test/cpp_headers/cpuset.o 00:03:24.920 CXX test/cpp_headers/crc32.o 00:03:24.920 CXX test/cpp_headers/crc64.o 00:03:24.920 CC app/spdk_tgt/spdk_tgt.o 00:03:24.920 CXX test/cpp_headers/dif.o 00:03:24.920 CXX test/cpp_headers/endian.o 00:03:24.920 CXX test/cpp_headers/dma.o 00:03:24.920 CXX test/cpp_headers/env_dpdk.o 00:03:24.920 CXX test/cpp_headers/env.o 00:03:24.920 CXX test/cpp_headers/event.o 00:03:24.920 CXX test/cpp_headers/fd_group.o 00:03:24.920 CXX test/cpp_headers/fd.o 00:03:24.920 CXX test/cpp_headers/file.o 00:03:24.920 CXX test/cpp_headers/fsdev.o 00:03:24.920 CXX test/cpp_headers/ftl.o 00:03:24.920 CXX test/cpp_headers/gpt_spec.o 00:03:24.920 CXX test/cpp_headers/fsdev_module.o 00:03:24.920 CXX test/cpp_headers/histogram_data.o 00:03:24.920 CXX test/cpp_headers/idxd.o 00:03:24.920 CXX test/cpp_headers/hexlify.o 00:03:24.920 CXX test/cpp_headers/idxd_spec.o 00:03:24.920 CXX test/cpp_headers/ioat.o 00:03:24.920 CXX test/cpp_headers/ioat_spec.o 00:03:24.920 CXX test/cpp_headers/init.o 00:03:24.920 CXX test/cpp_headers/json.o 00:03:24.920 CXX 
test/cpp_headers/jsonrpc.o 00:03:24.920 CXX test/cpp_headers/iscsi_spec.o 00:03:24.920 CXX test/cpp_headers/keyring_module.o 00:03:24.920 CXX test/cpp_headers/keyring.o 00:03:24.920 CXX test/cpp_headers/likely.o 00:03:24.920 CXX test/cpp_headers/lvol.o 00:03:24.920 CXX test/cpp_headers/log.o 00:03:24.920 CXX test/cpp_headers/md5.o 00:03:24.920 CXX test/cpp_headers/memory.o 00:03:24.920 CXX test/cpp_headers/mmio.o 00:03:24.920 CXX test/cpp_headers/nbd.o 00:03:24.920 CXX test/cpp_headers/net.o 00:03:24.920 CXX test/cpp_headers/notify.o 00:03:24.920 CXX test/cpp_headers/nvme.o 00:03:24.920 CXX test/cpp_headers/nvme_ocssd.o 00:03:24.920 CXX test/cpp_headers/nvme_intel.o 00:03:24.920 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:24.920 CXX test/cpp_headers/nvmf_cmd.o 00:03:24.920 CXX test/cpp_headers/nvme_spec.o 00:03:24.920 CXX test/cpp_headers/nvme_zns.o 00:03:24.920 CXX test/cpp_headers/nvmf.o 00:03:24.920 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:24.920 CXX test/cpp_headers/nvmf_spec.o 00:03:24.920 CXX test/cpp_headers/nvmf_transport.o 00:03:24.920 CXX test/cpp_headers/opal.o 00:03:25.190 CXX test/cpp_headers/opal_spec.o 00:03:25.190 CXX test/cpp_headers/pci_ids.o 00:03:25.190 CC app/fio/nvme/fio_plugin.o 00:03:25.190 CC test/thread/poller_perf/poller_perf.o 00:03:25.190 CC test/app/histogram_perf/histogram_perf.o 00:03:25.190 CC test/app/jsoncat/jsoncat.o 00:03:25.190 CC test/app/stub/stub.o 00:03:25.190 CC examples/ioat/verify/verify.o 00:03:25.190 CC examples/ioat/perf/perf.o 00:03:25.190 CC test/env/vtophys/vtophys.o 00:03:25.190 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.190 CC examples/util/zipf/zipf.o 00:03:25.190 CC test/env/memory/memory_ut.o 00:03:25.190 CC test/env/pci/pci_ut.o 00:03:25.190 CC test/app/bdev_svc/bdev_svc.o 00:03:25.190 CC app/fio/bdev/fio_plugin.o 00:03:25.461 CC test/dma/test_dma/test_dma.o 00:03:25.461 LINK spdk_lspci 00:03:25.461 LINK spdk_nvme_discover 00:03:25.461 LINK interrupt_tgt 00:03:25.461 LINK rpc_client_test 
00:03:25.461 LINK spdk_trace_record 00:03:25.721 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.721 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.721 LINK histogram_perf 00:03:25.721 LINK jsoncat 00:03:25.721 LINK nvmf_tgt 00:03:25.721 CXX test/cpp_headers/pipe.o 00:03:25.721 CXX test/cpp_headers/queue.o 00:03:25.721 CXX test/cpp_headers/reduce.o 00:03:25.721 CXX test/cpp_headers/rpc.o 00:03:25.721 CXX test/cpp_headers/scheduler.o 00:03:25.721 CXX test/cpp_headers/scsi.o 00:03:25.721 CXX test/cpp_headers/scsi_spec.o 00:03:25.721 CXX test/cpp_headers/sock.o 00:03:25.721 CXX test/cpp_headers/stdinc.o 00:03:25.721 CXX test/cpp_headers/string.o 00:03:25.721 CXX test/cpp_headers/thread.o 00:03:25.721 CXX test/cpp_headers/trace.o 00:03:25.721 CXX test/cpp_headers/trace_parser.o 00:03:25.721 CXX test/cpp_headers/tree.o 00:03:25.721 CXX test/cpp_headers/ublk.o 00:03:25.721 CXX test/cpp_headers/util.o 00:03:25.721 CXX test/cpp_headers/uuid.o 00:03:25.721 LINK zipf 00:03:25.721 CXX test/cpp_headers/version.o 00:03:25.721 LINK poller_perf 00:03:25.721 CXX test/cpp_headers/vfio_user_pci.o 00:03:25.721 CXX test/cpp_headers/vfio_user_spec.o 00:03:25.721 LINK iscsi_tgt 00:03:25.721 CXX test/cpp_headers/vhost.o 00:03:25.721 CXX test/cpp_headers/vmd.o 00:03:25.721 CXX test/cpp_headers/xor.o 00:03:25.721 CXX test/cpp_headers/zipf.o 00:03:25.721 LINK vtophys 00:03:25.721 LINK spdk_tgt 00:03:25.721 LINK stub 00:03:25.721 LINK env_dpdk_post_init 00:03:25.721 LINK bdev_svc 00:03:25.721 LINK ioat_perf 00:03:25.721 LINK verify 00:03:25.721 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:25.721 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:25.722 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:25.722 LINK spdk_trace 00:03:25.980 LINK spdk_dd 00:03:25.980 LINK spdk_nvme 00:03:25.980 LINK pci_ut 00:03:26.239 LINK spdk_nvme_identify 00:03:26.239 LINK test_dma 00:03:26.239 CC examples/idxd/perf/perf.o 00:03:26.239 CC test/event/event_perf/event_perf.o 00:03:26.239 CC 
test/event/reactor/reactor.o 00:03:26.239 CC test/event/reactor_perf/reactor_perf.o 00:03:26.239 CC examples/sock/hello_world/hello_sock.o 00:03:26.239 CC examples/vmd/led/led.o 00:03:26.239 CC examples/thread/thread/thread_ex.o 00:03:26.239 CC examples/vmd/lsvmd/lsvmd.o 00:03:26.239 LINK nvme_fuzz 00:03:26.239 LINK spdk_bdev 00:03:26.239 LINK spdk_nvme_perf 00:03:26.239 CC app/vhost/vhost.o 00:03:26.239 CC test/event/app_repeat/app_repeat.o 00:03:26.239 CC test/event/scheduler/scheduler.o 00:03:26.239 LINK vhost_fuzz 00:03:26.239 LINK spdk_top 00:03:26.239 LINK lsvmd 00:03:26.239 LINK reactor 00:03:26.239 LINK reactor_perf 00:03:26.239 LINK led 00:03:26.239 LINK event_perf 00:03:26.239 LINK mem_callbacks 00:03:26.498 LINK hello_sock 00:03:26.498 LINK app_repeat 00:03:26.498 LINK thread 00:03:26.498 LINK vhost 00:03:26.498 LINK idxd_perf 00:03:26.498 LINK scheduler 00:03:26.756 LINK memory_ut 00:03:26.756 CC test/nvme/reset/reset.o 00:03:26.756 CC test/nvme/aer/aer.o 00:03:26.756 CC test/nvme/e2edp/nvme_dp.o 00:03:26.756 CC test/nvme/reserve/reserve.o 00:03:26.756 CC test/nvme/startup/startup.o 00:03:26.756 CC test/nvme/connect_stress/connect_stress.o 00:03:26.756 CC test/nvme/fdp/fdp.o 00:03:26.756 CC test/nvme/fused_ordering/fused_ordering.o 00:03:26.756 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:26.756 CC test/nvme/overhead/overhead.o 00:03:26.756 CC test/nvme/err_injection/err_injection.o 00:03:26.756 CC test/nvme/compliance/nvme_compliance.o 00:03:26.756 CC test/nvme/sgl/sgl.o 00:03:26.756 CC test/nvme/boot_partition/boot_partition.o 00:03:26.756 CC test/blobfs/mkfs/mkfs.o 00:03:26.756 CC test/nvme/cuse/cuse.o 00:03:26.756 CC test/accel/dif/dif.o 00:03:26.756 CC test/nvme/simple_copy/simple_copy.o 00:03:26.756 CC test/lvol/esnap/esnap.o 00:03:26.756 CC examples/nvme/abort/abort.o 00:03:26.756 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:26.756 CC examples/nvme/arbitration/arbitration.o 00:03:26.756 CC examples/nvme/hello_world/hello_world.o 
00:03:27.015 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.015 CC examples/nvme/reconnect/reconnect.o 00:03:27.015 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:27.015 CC examples/nvme/hotplug/hotplug.o 00:03:27.015 LINK fused_ordering 00:03:27.015 CC examples/accel/perf/accel_perf.o 00:03:27.015 LINK doorbell_aers 00:03:27.015 LINK connect_stress 00:03:27.015 LINK boot_partition 00:03:27.015 LINK startup 00:03:27.015 LINK reset 00:03:27.015 LINK mkfs 00:03:27.015 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:27.015 CC examples/blob/hello_world/hello_blob.o 00:03:27.015 LINK err_injection 00:03:27.015 LINK reserve 00:03:27.015 CC examples/blob/cli/blobcli.o 00:03:27.015 LINK simple_copy 00:03:27.015 LINK nvme_dp 00:03:27.015 LINK sgl 00:03:27.015 LINK overhead 00:03:27.015 LINK nvme_compliance 00:03:27.015 LINK aer 00:03:27.015 LINK fdp 00:03:27.015 LINK pmr_persistence 00:03:27.015 LINK cmb_copy 00:03:27.015 LINK hello_world 00:03:27.015 LINK hotplug 00:03:27.274 LINK arbitration 00:03:27.274 LINK reconnect 00:03:27.274 LINK abort 00:03:27.274 LINK iscsi_fuzz 00:03:27.274 LINK hello_blob 00:03:27.274 LINK hello_fsdev 00:03:27.274 LINK nvme_manage 00:03:27.274 LINK dif 00:03:27.274 LINK accel_perf 00:03:27.532 LINK blobcli 00:03:27.790 LINK cuse 00:03:27.790 CC test/bdev/bdevio/bdevio.o 00:03:27.790 CC examples/bdev/hello_world/hello_bdev.o 00:03:27.790 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.049 LINK hello_bdev 00:03:28.307 LINK bdevio 00:03:28.566 LINK bdevperf 00:03:29.133 CC examples/nvmf/nvmf/nvmf.o 00:03:29.392 LINK nvmf 00:03:30.768 LINK esnap 00:03:30.768 00:03:30.768 real 0m56.533s 00:03:30.768 user 8m1.802s 00:03:30.768 sys 3m44.310s 00:03:30.768 22:34:38 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:30.768 22:34:38 make -- common/autotest_common.sh@10 -- $ set +x 00:03:30.768 ************************************ 00:03:30.768 END TEST make 00:03:30.768 ************************************ 00:03:30.768 22:34:38 -- 
spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:30.768 22:34:38 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:30.768 22:34:38 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:30.768 22:34:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.768 22:34:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-load.pid ]] 00:03:30.768 22:34:38 -- pm/common@44 -- $ pid=2097316 00:03:30.768 22:34:38 -- pm/common@50 -- $ kill -TERM 2097316 00:03:30.768 22:34:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.768 22:34:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-vmstat.pid ]] 00:03:30.768 22:34:38 -- pm/common@44 -- $ pid=2097318 00:03:30.768 22:34:38 -- pm/common@50 -- $ kill -TERM 2097318 00:03:30.768 22:34:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.768 22:34:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:30.768 22:34:38 -- pm/common@44 -- $ pid=2097320 00:03:30.768 22:34:38 -- pm/common@50 -- $ kill -TERM 2097320 00:03:30.768 22:34:38 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:30.768 22:34:38 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:30.768 22:34:38 -- pm/common@44 -- $ pid=2097343 00:03:30.768 22:34:38 -- pm/common@50 -- $ sudo -E kill -TERM 2097343 00:03:30.768 22:34:38 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:30.768 22:34:38 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/autorun-spdk.conf 00:03:30.768 22:34:38 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:30.768 22:34:38 -- common/autotest_common.sh@1711 
-- # lcov --version 00:03:30.768 22:34:38 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:31.028 22:34:38 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:31.028 22:34:38 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:31.028 22:34:38 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:31.028 22:34:38 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:31.028 22:34:38 -- scripts/common.sh@336 -- # IFS=.-: 00:03:31.028 22:34:38 -- scripts/common.sh@336 -- # read -ra ver1 00:03:31.028 22:34:38 -- scripts/common.sh@337 -- # IFS=.-: 00:03:31.028 22:34:38 -- scripts/common.sh@337 -- # read -ra ver2 00:03:31.028 22:34:38 -- scripts/common.sh@338 -- # local 'op=<' 00:03:31.028 22:34:38 -- scripts/common.sh@340 -- # ver1_l=2 00:03:31.028 22:34:38 -- scripts/common.sh@341 -- # ver2_l=1 00:03:31.028 22:34:38 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:31.028 22:34:38 -- scripts/common.sh@344 -- # case "$op" in 00:03:31.028 22:34:38 -- scripts/common.sh@345 -- # : 1 00:03:31.028 22:34:38 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:31.028 22:34:38 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:31.028 22:34:38 -- scripts/common.sh@365 -- # decimal 1 00:03:31.028 22:34:38 -- scripts/common.sh@353 -- # local d=1 00:03:31.028 22:34:38 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:31.028 22:34:38 -- scripts/common.sh@355 -- # echo 1 00:03:31.028 22:34:38 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:31.028 22:34:38 -- scripts/common.sh@366 -- # decimal 2 00:03:31.028 22:34:38 -- scripts/common.sh@353 -- # local d=2 00:03:31.028 22:34:38 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:31.028 22:34:38 -- scripts/common.sh@355 -- # echo 2 00:03:31.028 22:34:38 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:31.028 22:34:38 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:31.028 22:34:38 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:31.028 22:34:38 -- scripts/common.sh@368 -- # return 0 00:03:31.028 22:34:38 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:31.028 22:34:38 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.028 --rc genhtml_branch_coverage=1 00:03:31.028 --rc genhtml_function_coverage=1 00:03:31.028 --rc genhtml_legend=1 00:03:31.028 --rc geninfo_all_blocks=1 00:03:31.028 --rc geninfo_unexecuted_blocks=1 00:03:31.028 00:03:31.028 ' 00:03:31.028 22:34:38 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.028 --rc genhtml_branch_coverage=1 00:03:31.028 --rc genhtml_function_coverage=1 00:03:31.028 --rc genhtml_legend=1 00:03:31.028 --rc geninfo_all_blocks=1 00:03:31.028 --rc geninfo_unexecuted_blocks=1 00:03:31.028 00:03:31.028 ' 00:03:31.028 22:34:38 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.028 --rc genhtml_branch_coverage=1 00:03:31.028 --rc 
genhtml_function_coverage=1 00:03:31.028 --rc genhtml_legend=1 00:03:31.028 --rc geninfo_all_blocks=1 00:03:31.028 --rc geninfo_unexecuted_blocks=1 00:03:31.028 00:03:31.028 ' 00:03:31.028 22:34:38 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:31.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:31.028 --rc genhtml_branch_coverage=1 00:03:31.028 --rc genhtml_function_coverage=1 00:03:31.028 --rc genhtml_legend=1 00:03:31.028 --rc geninfo_all_blocks=1 00:03:31.028 --rc geninfo_unexecuted_blocks=1 00:03:31.028 00:03:31.028 ' 00:03:31.028 22:34:38 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:03:31.028 22:34:38 -- nvmf/common.sh@7 -- # uname -s 00:03:31.028 22:34:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:31.028 22:34:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:31.028 22:34:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:31.028 22:34:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:31.028 22:34:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:31.028 22:34:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:31.028 22:34:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:31.028 22:34:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:31.028 22:34:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:31.028 22:34:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:31.028 22:34:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:31.028 22:34:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:31.028 22:34:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:31.028 22:34:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:31.028 22:34:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:31.028 22:34:38 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:31.028 22:34:38 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:03:31.028 22:34:38 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:31.028 22:34:38 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:31.028 22:34:38 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:31.028 22:34:38 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:31.028 22:34:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.028 22:34:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.028 22:34:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.028 22:34:38 -- paths/export.sh@5 -- # export PATH 00:03:31.028 22:34:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:31.028 22:34:38 -- nvmf/common.sh@51 -- # : 0 00:03:31.028 22:34:38 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:31.028 22:34:38 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:31.028 22:34:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:31.028 22:34:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:31.028 22:34:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:31.028 22:34:38 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:31.028 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:31.028 22:34:38 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:31.028 22:34:38 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:31.028 22:34:38 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:31.028 22:34:38 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:31.028 22:34:38 -- spdk/autotest.sh@32 -- # uname -s 00:03:31.028 22:34:38 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:31.028 22:34:38 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:31.028 22:34:38 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:03:31.028 22:34:38 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/core-collector.sh %P %s %t' 00:03:31.028 22:34:38 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/coredumps 00:03:31.028 22:34:38 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:31.028 22:34:38 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:31.028 22:34:38 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:31.028 22:34:38 -- spdk/autotest.sh@48 -- # udevadm_pid=2161012 00:03:31.028 22:34:38 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:31.028 22:34:38 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:31.028 22:34:38 -- pm/common@17 -- # local monitor 00:03:31.028 22:34:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.028 22:34:38 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:31.028 22:34:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.028 22:34:38 -- pm/common@21 -- # date +%s 00:03:31.028 22:34:38 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:31.028 22:34:38 -- pm/common@21 -- # date +%s 00:03:31.028 22:34:38 -- pm/common@25 -- # sleep 1 00:03:31.028 22:34:38 -- pm/common@21 -- # date +%s 00:03:31.028 22:34:38 -- pm/common@21 -- # date +%s 00:03:31.028 22:34:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733866478 00:03:31.028 22:34:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733866478 00:03:31.028 22:34:38 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733866478 00:03:31.028 22:34:38 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power -l -p monitor.autotest.sh.1733866478 00:03:31.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733866478_collect-cpu-load.pm.log 00:03:31.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733866478_collect-vmstat.pm.log 00:03:31.028 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733866478_collect-cpu-temp.pm.log 00:03:31.028 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power/monitor.autotest.sh.1733866478_collect-bmc-pm.bmc.pm.log 00:03:31.963 22:34:39 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:31.963 22:34:39 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:31.963 22:34:39 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:31.963 22:34:39 -- common/autotest_common.sh@10 -- # set +x 00:03:31.963 22:34:39 -- spdk/autotest.sh@59 -- # create_test_list 00:03:31.963 22:34:39 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:31.963 22:34:39 -- common/autotest_common.sh@10 -- # set +x 00:03:31.963 22:34:39 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/autotest.sh 00:03:31.963 22:34:39 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:03:31.963 22:34:39 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:03:31.963 22:34:39 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output 00:03:31.963 22:34:39 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:03:31.963 22:34:39 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:31.963 22:34:39 -- common/autotest_common.sh@1457 -- # uname 00:03:31.963 22:34:39 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:31.963 22:34:39 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:31.963 22:34:39 -- common/autotest_common.sh@1477 -- # uname 00:03:32.222 22:34:39 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:32.222 22:34:39 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:32.222 22:34:39 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 
--version 00:03:32.222 lcov: LCOV version 1.15 00:03:32.222 22:34:39 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info 00:03:44.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:44.431 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/nvme/nvme_stubs.gcno 00:03:59.314 22:35:04 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:59.314 22:35:04 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:59.314 22:35:04 -- common/autotest_common.sh@10 -- # set +x 00:03:59.314 22:35:04 -- spdk/autotest.sh@78 -- # rm -f 00:03:59.314 22:35:04 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:03:59.882 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:59.882 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:59.882 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:59.882 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:00.142 
0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:00.142 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:00.401 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:00.401 22:35:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:00.401 22:35:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:00.401 22:35:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:00.401 22:35:07 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:00.401 22:35:07 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:00.401 22:35:07 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:00.401 22:35:07 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:00.401 22:35:07 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:00.401 22:35:07 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:00.401 22:35:07 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:00.401 22:35:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:00.401 22:35:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.401 22:35:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:00.401 22:35:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:00.401 22:35:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.401 22:35:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:00.401 22:35:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:00.401 22:35:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:00.401 22:35:07 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.401 No valid GPT data, bailing 00:04:00.401 22:35:08 -- scripts/common.sh@394 -- 
# blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.401 22:35:08 -- scripts/common.sh@394 -- # pt= 00:04:00.401 22:35:08 -- scripts/common.sh@395 -- # return 1 00:04:00.401 22:35:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.401 1+0 records in 00:04:00.401 1+0 records out 00:04:00.401 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406625 s, 258 MB/s 00:04:00.401 22:35:08 -- spdk/autotest.sh@105 -- # sync 00:04:00.401 22:35:08 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.401 22:35:08 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.401 22:35:08 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:06.977 22:35:13 -- spdk/autotest.sh@111 -- # uname -s 00:04:06.977 22:35:13 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:06.977 22:35:13 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:06.977 22:35:13 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh status 00:04:08.883 Hugepages 00:04:08.883 node hugesize free / total 00:04:08.883 node0 1048576kB 0 / 0 00:04:08.883 node0 2048kB 0 / 0 00:04:08.883 node1 1048576kB 0 / 0 00:04:08.883 node1 2048kB 0 / 0 00:04:08.883 00:04:08.883 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:08.883 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:08.883 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:08.883 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:08.883 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:08.883 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:08.883 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 
00:04:08.883 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:08.883 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:08.883 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:08.883 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:08.883 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:08.883 22:35:16 -- spdk/autotest.sh@117 -- # uname -s 00:04:08.883 22:35:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:08.883 22:35:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:08.883 22:35:16 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:04:12.186 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:12.186 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:12.755 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:12.755 22:35:20 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:13.693 22:35:21 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:13.693 22:35:21 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:13.693 22:35:21 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:13.693 22:35:21 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:13.693 
22:35:21 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:13.693 22:35:21 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:13.693 22:35:21 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:13.693 22:35:21 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:04:13.693 22:35:21 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:13.952 22:35:21 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:13.952 22:35:21 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:13.952 22:35:21 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:04:16.490 Waiting for block devices as requested 00:04:16.490 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:16.750 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:16.750 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:17.010 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.010 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.010 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:17.010 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:17.269 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:17.269 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:17.269 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:17.529 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:17.529 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:17.529 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:17.795 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:17.795 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:17.795 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:17.795 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:18.054 22:35:25 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:18.054 22:35:25 -- common/autotest_common.sh@1525 -- # 
get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:18.054 22:35:25 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:18.054 22:35:25 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:18.054 22:35:25 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:18.054 22:35:25 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:18.054 22:35:25 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:04:18.054 22:35:25 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:18.054 22:35:25 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:18.054 22:35:25 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:18.054 22:35:25 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:18.054 22:35:25 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:18.054 22:35:25 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:18.054 22:35:25 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:18.054 22:35:25 -- common/autotest_common.sh@1543 -- # continue 00:04:18.054 22:35:25 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:18.054 22:35:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:18.054 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.054 
22:35:25 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:18.054 22:35:25 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:18.054 22:35:25 -- common/autotest_common.sh@10 -- # set +x 00:04:18.054 22:35:25 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:04:21.395 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:21.395 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:21.992 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:21.992 22:35:29 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:21.992 22:35:29 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:21.992 22:35:29 -- common/autotest_common.sh@10 -- # set +x 00:04:21.992 22:35:29 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:21.992 22:35:29 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:21.992 22:35:29 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:21.992 22:35:29 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:21.992 22:35:29 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:21.992 22:35:29 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 
00:04:21.992 22:35:29 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:21.992 22:35:29 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:21.992 22:35:29 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:21.992 22:35:29 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:21.992 22:35:29 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:21.992 22:35:29 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:04:21.992 22:35:29 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:22.252 22:35:29 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:22.252 22:35:29 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:22.252 22:35:29 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:22.252 22:35:29 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:22.252 22:35:29 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:22.252 22:35:29 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:22.252 22:35:29 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:22.252 22:35:29 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:22.252 22:35:29 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:22.252 22:35:29 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:22.252 22:35:29 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2175424 00:04:22.252 22:35:29 -- common/autotest_common.sh@1585 -- # waitforlisten 2175424 00:04:22.252 22:35:29 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:22.252 22:35:29 -- common/autotest_common.sh@835 -- # '[' -z 2175424 ']' 00:04:22.252 22:35:29 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.252 22:35:29 -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.252 22:35:29 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.252 22:35:29 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.252 22:35:29 -- common/autotest_common.sh@10 -- # set +x 00:04:22.252 [2024-12-10 22:35:29.794172] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:04:22.252 [2024-12-10 22:35:29.794220] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2175424 ] 00:04:22.252 [2024-12-10 22:35:29.869086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:22.252 [2024-12-10 22:35:29.908725] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.511 22:35:30 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.511 22:35:30 -- common/autotest_common.sh@868 -- # return 0 00:04:22.511 22:35:30 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:22.511 22:35:30 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:22.511 22:35:30 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:25.802 nvme0n1 00:04:25.802 22:35:33 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:25.802 [2024-12-10 22:35:33.327789] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:25.802 request: 00:04:25.802 { 00:04:25.802 "nvme_ctrlr_name": "nvme0", 00:04:25.802 "password": "test", 
00:04:25.802 "method": "bdev_nvme_opal_revert", 00:04:25.802 "req_id": 1 00:04:25.802 } 00:04:25.802 Got JSON-RPC error response 00:04:25.802 response: 00:04:25.802 { 00:04:25.802 "code": -32602, 00:04:25.802 "message": "Invalid parameters" 00:04:25.802 } 00:04:25.802 22:35:33 -- common/autotest_common.sh@1591 -- # true 00:04:25.802 22:35:33 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:25.802 22:35:33 -- common/autotest_common.sh@1595 -- # killprocess 2175424 00:04:25.802 22:35:33 -- common/autotest_common.sh@954 -- # '[' -z 2175424 ']' 00:04:25.802 22:35:33 -- common/autotest_common.sh@958 -- # kill -0 2175424 00:04:25.802 22:35:33 -- common/autotest_common.sh@959 -- # uname 00:04:25.802 22:35:33 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.802 22:35:33 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2175424 00:04:25.802 22:35:33 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.802 22:35:33 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.802 22:35:33 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2175424' 00:04:25.802 killing process with pid 2175424 00:04:25.802 22:35:33 -- common/autotest_common.sh@973 -- # kill 2175424 00:04:25.802 22:35:33 -- common/autotest_common.sh@978 -- # wait 2175424 00:04:27.708 22:35:35 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:27.708 22:35:35 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:27.708 22:35:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.708 22:35:35 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:27.708 22:35:35 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:27.708 22:35:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.708 22:35:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.708 22:35:35 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:27.708 22:35:35 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:04:27.708 22:35:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.708 22:35:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.708 22:35:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.708 ************************************ 00:04:27.708 START TEST env 00:04:27.708 ************************************ 00:04:27.708 22:35:35 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env.sh 00:04:27.708 * Looking for test storage... 00:04:27.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env 00:04:27.708 22:35:35 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:27.708 22:35:35 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:27.708 22:35:35 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:27.708 22:35:35 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:27.708 22:35:35 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:27.708 22:35:35 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:27.708 22:35:35 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:27.708 22:35:35 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:27.708 22:35:35 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:27.708 22:35:35 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:27.708 22:35:35 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:27.708 22:35:35 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:27.708 22:35:35 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:27.709 22:35:35 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:27.709 22:35:35 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:27.709 22:35:35 env -- scripts/common.sh@344 -- # case "$op" in 00:04:27.709 22:35:35 env -- scripts/common.sh@345 -- # : 1 00:04:27.709 22:35:35 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:27.709 22:35:35 
env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:27.709 22:35:35 env -- scripts/common.sh@365 -- # decimal 1 00:04:27.709 22:35:35 env -- scripts/common.sh@353 -- # local d=1 00:04:27.709 22:35:35 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:27.709 22:35:35 env -- scripts/common.sh@355 -- # echo 1 00:04:27.709 22:35:35 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:27.709 22:35:35 env -- scripts/common.sh@366 -- # decimal 2 00:04:27.709 22:35:35 env -- scripts/common.sh@353 -- # local d=2 00:04:27.709 22:35:35 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:27.709 22:35:35 env -- scripts/common.sh@355 -- # echo 2 00:04:27.709 22:35:35 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:27.709 22:35:35 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:27.709 22:35:35 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:27.709 22:35:35 env -- scripts/common.sh@368 -- # return 0 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.709 --rc genhtml_branch_coverage=1 00:04:27.709 --rc genhtml_function_coverage=1 00:04:27.709 --rc genhtml_legend=1 00:04:27.709 --rc geninfo_all_blocks=1 00:04:27.709 --rc geninfo_unexecuted_blocks=1 00:04:27.709 00:04:27.709 ' 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.709 --rc genhtml_branch_coverage=1 00:04:27.709 --rc genhtml_function_coverage=1 00:04:27.709 --rc genhtml_legend=1 00:04:27.709 --rc geninfo_all_blocks=1 00:04:27.709 --rc geninfo_unexecuted_blocks=1 00:04:27.709 00:04:27.709 ' 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:27.709 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.709 --rc genhtml_branch_coverage=1 00:04:27.709 --rc genhtml_function_coverage=1 00:04:27.709 --rc genhtml_legend=1 00:04:27.709 --rc geninfo_all_blocks=1 00:04:27.709 --rc geninfo_unexecuted_blocks=1 00:04:27.709 00:04:27.709 ' 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:27.709 --rc genhtml_branch_coverage=1 00:04:27.709 --rc genhtml_function_coverage=1 00:04:27.709 --rc genhtml_legend=1 00:04:27.709 --rc geninfo_all_blocks=1 00:04:27.709 --rc geninfo_unexecuted_blocks=1 00:04:27.709 00:04:27.709 ' 00:04:27.709 22:35:35 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.709 22:35:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.709 22:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.709 ************************************ 00:04:27.709 START TEST env_memory 00:04:27.709 ************************************ 00:04:27.709 22:35:35 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/memory/memory_ut 00:04:27.709 00:04:27.709 00:04:27.709 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.709 http://cunit.sourceforge.net/ 00:04:27.709 00:04:27.709 00:04:27.709 Suite: memory 00:04:27.709 Test: alloc and free memory map ...[2024-12-10 22:35:35.331218] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:27.709 passed 00:04:27.709 Test: mem map translation ...[2024-12-10 22:35:35.351096] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation 
parameters, vaddr=2097152 len=1234 00:04:27.709 [2024-12-10 22:35:35.351110] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:27.709 [2024-12-10 22:35:35.351147] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:27.709 [2024-12-10 22:35:35.351153] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:27.709 passed 00:04:27.709 Test: mem map registration ...[2024-12-10 22:35:35.389648] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:27.709 [2024-12-10 22:35:35.389662] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:27.709 passed 00:04:27.969 Test: mem map adjacent registrations ...passed 00:04:27.969 00:04:27.969 Run Summary: Type Total Ran Passed Failed Inactive 00:04:27.969 suites 1 1 n/a 0 0 00:04:27.969 tests 4 4 4 0 0 00:04:27.969 asserts 152 152 152 0 n/a 00:04:27.969 00:04:27.969 Elapsed time = 0.140 seconds 00:04:27.969 00:04:27.969 real 0m0.153s 00:04:27.969 user 0m0.138s 00:04:27.969 sys 0m0.015s 00:04:27.969 22:35:35 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:27.969 22:35:35 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:27.969 ************************************ 00:04:27.969 END TEST env_memory 00:04:27.969 ************************************ 00:04:27.969 22:35:35 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 
00:04:27.969 22:35:35 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:27.969 22:35:35 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:27.969 22:35:35 env -- common/autotest_common.sh@10 -- # set +x 00:04:27.969 ************************************ 00:04:27.969 START TEST env_vtophys 00:04:27.969 ************************************ 00:04:27.969 22:35:35 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/vtophys/vtophys 00:04:27.969 EAL: lib.eal log level changed from notice to debug 00:04:27.969 EAL: Detected lcore 0 as core 0 on socket 0 00:04:27.969 EAL: Detected lcore 1 as core 1 on socket 0 00:04:27.969 EAL: Detected lcore 2 as core 2 on socket 0 00:04:27.969 EAL: Detected lcore 3 as core 3 on socket 0 00:04:27.969 EAL: Detected lcore 4 as core 4 on socket 0 00:04:27.969 EAL: Detected lcore 5 as core 5 on socket 0 00:04:27.969 EAL: Detected lcore 6 as core 6 on socket 0 00:04:27.969 EAL: Detected lcore 7 as core 8 on socket 0 00:04:27.969 EAL: Detected lcore 8 as core 9 on socket 0 00:04:27.969 EAL: Detected lcore 9 as core 10 on socket 0 00:04:27.969 EAL: Detected lcore 10 as core 11 on socket 0 00:04:27.969 EAL: Detected lcore 11 as core 12 on socket 0 00:04:27.969 EAL: Detected lcore 12 as core 13 on socket 0 00:04:27.969 EAL: Detected lcore 13 as core 16 on socket 0 00:04:27.969 EAL: Detected lcore 14 as core 17 on socket 0 00:04:27.969 EAL: Detected lcore 15 as core 18 on socket 0 00:04:27.969 EAL: Detected lcore 16 as core 19 on socket 0 00:04:27.969 EAL: Detected lcore 17 as core 20 on socket 0 00:04:27.969 EAL: Detected lcore 18 as core 21 on socket 0 00:04:27.969 EAL: Detected lcore 19 as core 25 on socket 0 00:04:27.969 EAL: Detected lcore 20 as core 26 on socket 0 00:04:27.969 EAL: Detected lcore 21 as core 27 on socket 0 00:04:27.969 EAL: Detected lcore 22 as core 28 on socket 0 00:04:27.969 EAL: Detected lcore 23 as core 29 on socket 0 00:04:27.969 
EAL: Detected lcore 24 as core 0 on socket 1 00:04:27.969 EAL: Detected lcore 25 as core 1 on socket 1 00:04:27.969 EAL: Detected lcore 26 as core 2 on socket 1 00:04:27.969 EAL: Detected lcore 27 as core 3 on socket 1 00:04:27.969 EAL: Detected lcore 28 as core 4 on socket 1 00:04:27.969 EAL: Detected lcore 29 as core 5 on socket 1 00:04:27.969 EAL: Detected lcore 30 as core 6 on socket 1 00:04:27.969 EAL: Detected lcore 31 as core 9 on socket 1 00:04:27.969 EAL: Detected lcore 32 as core 10 on socket 1 00:04:27.969 EAL: Detected lcore 33 as core 11 on socket 1 00:04:27.969 EAL: Detected lcore 34 as core 12 on socket 1 00:04:27.969 EAL: Detected lcore 35 as core 13 on socket 1 00:04:27.969 EAL: Detected lcore 36 as core 16 on socket 1 00:04:27.969 EAL: Detected lcore 37 as core 17 on socket 1 00:04:27.969 EAL: Detected lcore 38 as core 18 on socket 1 00:04:27.969 EAL: Detected lcore 39 as core 19 on socket 1 00:04:27.969 EAL: Detected lcore 40 as core 20 on socket 1 00:04:27.969 EAL: Detected lcore 41 as core 21 on socket 1 00:04:27.969 EAL: Detected lcore 42 as core 24 on socket 1 00:04:27.969 EAL: Detected lcore 43 as core 25 on socket 1 00:04:27.969 EAL: Detected lcore 44 as core 26 on socket 1 00:04:27.969 EAL: Detected lcore 45 as core 27 on socket 1 00:04:27.969 EAL: Detected lcore 46 as core 28 on socket 1 00:04:27.969 EAL: Detected lcore 47 as core 29 on socket 1 00:04:27.969 EAL: Detected lcore 48 as core 0 on socket 0 00:04:27.969 EAL: Detected lcore 49 as core 1 on socket 0 00:04:27.969 EAL: Detected lcore 50 as core 2 on socket 0 00:04:27.969 EAL: Detected lcore 51 as core 3 on socket 0 00:04:27.969 EAL: Detected lcore 52 as core 4 on socket 0 00:04:27.969 EAL: Detected lcore 53 as core 5 on socket 0 00:04:27.969 EAL: Detected lcore 54 as core 6 on socket 0 00:04:27.969 EAL: Detected lcore 55 as core 8 on socket 0 00:04:27.969 EAL: Detected lcore 56 as core 9 on socket 0 00:04:27.969 EAL: Detected lcore 57 as core 10 on socket 0 00:04:27.969 EAL: 
Detected lcore 58 as core 11 on socket 0 00:04:27.969 EAL: Detected lcore 59 as core 12 on socket 0 00:04:27.969 EAL: Detected lcore 60 as core 13 on socket 0 00:04:27.969 EAL: Detected lcore 61 as core 16 on socket 0 00:04:27.969 EAL: Detected lcore 62 as core 17 on socket 0 00:04:27.969 EAL: Detected lcore 63 as core 18 on socket 0 00:04:27.969 EAL: Detected lcore 64 as core 19 on socket 0 00:04:27.969 EAL: Detected lcore 65 as core 20 on socket 0 00:04:27.969 EAL: Detected lcore 66 as core 21 on socket 0 00:04:27.969 EAL: Detected lcore 67 as core 25 on socket 0 00:04:27.969 EAL: Detected lcore 68 as core 26 on socket 0 00:04:27.969 EAL: Detected lcore 69 as core 27 on socket 0 00:04:27.969 EAL: Detected lcore 70 as core 28 on socket 0 00:04:27.969 EAL: Detected lcore 71 as core 29 on socket 0 00:04:27.969 EAL: Detected lcore 72 as core 0 on socket 1 00:04:27.969 EAL: Detected lcore 73 as core 1 on socket 1 00:04:27.969 EAL: Detected lcore 74 as core 2 on socket 1 00:04:27.969 EAL: Detected lcore 75 as core 3 on socket 1 00:04:27.969 EAL: Detected lcore 76 as core 4 on socket 1 00:04:27.969 EAL: Detected lcore 77 as core 5 on socket 1 00:04:27.969 EAL: Detected lcore 78 as core 6 on socket 1 00:04:27.969 EAL: Detected lcore 79 as core 9 on socket 1 00:04:27.969 EAL: Detected lcore 80 as core 10 on socket 1 00:04:27.969 EAL: Detected lcore 81 as core 11 on socket 1 00:04:27.969 EAL: Detected lcore 82 as core 12 on socket 1 00:04:27.969 EAL: Detected lcore 83 as core 13 on socket 1 00:04:27.969 EAL: Detected lcore 84 as core 16 on socket 1 00:04:27.969 EAL: Detected lcore 85 as core 17 on socket 1 00:04:27.969 EAL: Detected lcore 86 as core 18 on socket 1 00:04:27.969 EAL: Detected lcore 87 as core 19 on socket 1 00:04:27.969 EAL: Detected lcore 88 as core 20 on socket 1 00:04:27.969 EAL: Detected lcore 89 as core 21 on socket 1 00:04:27.969 EAL: Detected lcore 90 as core 24 on socket 1 00:04:27.969 EAL: Detected lcore 91 as core 25 on socket 1 00:04:27.969 EAL: 
Detected lcore 92 as core 26 on socket 1 00:04:27.969 EAL: Detected lcore 93 as core 27 on socket 1 00:04:27.969 EAL: Detected lcore 94 as core 28 on socket 1 00:04:27.969 EAL: Detected lcore 95 as core 29 on socket 1 00:04:27.969 EAL: Maximum logical cores by configuration: 128 00:04:27.969 EAL: Detected CPU lcores: 96 00:04:27.969 EAL: Detected NUMA nodes: 2 00:04:27.969 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:27.969 EAL: Detected shared linkage of DPDK 00:04:27.969 EAL: No shared files mode enabled, IPC will be disabled 00:04:27.969 EAL: Bus pci wants IOVA as 'DC' 00:04:27.969 EAL: Buses did not request a specific IOVA mode. 00:04:27.969 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:27.969 EAL: Selected IOVA mode 'VA' 00:04:27.969 EAL: Probing VFIO support... 00:04:27.969 EAL: IOMMU type 1 (Type 1) is supported 00:04:27.969 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:27.969 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:27.969 EAL: VFIO support initialized 00:04:27.969 EAL: Ask a virtual area of 0x2e000 bytes 00:04:27.969 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:27.969 EAL: Setting up physically contiguous memory... 
00:04:27.969 EAL: Setting maximum number of open files to 524288 00:04:27.969 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:27.969 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:27.969 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:27.969 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.969 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:27.970 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:27.970 EAL: Ask a virtual area of 0x61000 bytes 00:04:27.970 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:27.970 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:27.970 EAL: Ask a virtual area of 0x400000000 bytes 00:04:27.970 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:27.970 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:27.970 EAL: Hugepages will be freed exactly as allocated. 
00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: TSC frequency is ~2300000 KHz 00:04:27.970 EAL: Main lcore 0 is ready (tid=7f0934c29a00;cpuset=[0]) 00:04:27.970 EAL: Trying to obtain current memory policy. 00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 0 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 2MB 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:27.970 EAL: Mem event callback 'spdk:(nil)' registered 00:04:27.970 00:04:27.970 00:04:27.970 CUnit - A unit testing framework for C - Version 2.1-3 00:04:27.970 http://cunit.sourceforge.net/ 00:04:27.970 00:04:27.970 00:04:27.970 Suite: components_suite 00:04:27.970 Test: vtophys_malloc_test ...passed 00:04:27.970 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 4MB 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was shrunk by 4MB 00:04:27.970 EAL: Trying to obtain current memory policy. 
00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 6MB 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was shrunk by 6MB 00:04:27.970 EAL: Trying to obtain current memory policy. 00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 10MB 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was shrunk by 10MB 00:04:27.970 EAL: Trying to obtain current memory policy. 00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 18MB 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was shrunk by 18MB 00:04:27.970 EAL: Trying to obtain current memory policy. 
00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 34MB 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was shrunk by 34MB 00:04:27.970 EAL: Trying to obtain current memory policy. 00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 66MB 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was shrunk by 66MB 00:04:27.970 EAL: Trying to obtain current memory policy. 00:04:27.970 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:27.970 EAL: Restoring previous memory policy: 4 00:04:27.970 EAL: Calling mem event callback 'spdk:(nil)' 00:04:27.970 EAL: request: mp_malloc_sync 00:04:27.970 EAL: No shared files mode enabled, IPC is disabled 00:04:27.970 EAL: Heap on socket 0 was expanded by 130MB 00:04:28.230 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.230 EAL: request: mp_malloc_sync 00:04:28.230 EAL: No shared files mode enabled, IPC is disabled 00:04:28.230 EAL: Heap on socket 0 was shrunk by 130MB 00:04:28.230 EAL: Trying to obtain current memory policy. 
00:04:28.230 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.230 EAL: Restoring previous memory policy: 4 00:04:28.230 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.230 EAL: request: mp_malloc_sync 00:04:28.230 EAL: No shared files mode enabled, IPC is disabled 00:04:28.230 EAL: Heap on socket 0 was expanded by 258MB 00:04:28.230 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.230 EAL: request: mp_malloc_sync 00:04:28.230 EAL: No shared files mode enabled, IPC is disabled 00:04:28.230 EAL: Heap on socket 0 was shrunk by 258MB 00:04:28.230 EAL: Trying to obtain current memory policy. 00:04:28.230 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.230 EAL: Restoring previous memory policy: 4 00:04:28.230 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.230 EAL: request: mp_malloc_sync 00:04:28.230 EAL: No shared files mode enabled, IPC is disabled 00:04:28.230 EAL: Heap on socket 0 was expanded by 514MB 00:04:28.489 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.489 EAL: request: mp_malloc_sync 00:04:28.489 EAL: No shared files mode enabled, IPC is disabled 00:04:28.489 EAL: Heap on socket 0 was shrunk by 514MB 00:04:28.489 EAL: Trying to obtain current memory policy. 
00:04:28.489 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:28.748 EAL: Restoring previous memory policy: 4 00:04:28.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:28.748 EAL: request: mp_malloc_sync 00:04:28.748 EAL: No shared files mode enabled, IPC is disabled 00:04:28.748 EAL: Heap on socket 0 was expanded by 1026MB 00:04:28.748 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.008 EAL: request: mp_malloc_sync 00:04:29.008 EAL: No shared files mode enabled, IPC is disabled 00:04:29.008 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:29.008 passed 00:04:29.008 00:04:29.008 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.008 suites 1 1 n/a 0 0 00:04:29.008 tests 2 2 2 0 0 00:04:29.008 asserts 497 497 497 0 n/a 00:04:29.008 00:04:29.008 Elapsed time = 0.976 seconds 00:04:29.008 EAL: Calling mem event callback 'spdk:(nil)' 00:04:29.008 EAL: request: mp_malloc_sync 00:04:29.008 EAL: No shared files mode enabled, IPC is disabled 00:04:29.008 EAL: Heap on socket 0 was shrunk by 2MB 00:04:29.008 EAL: No shared files mode enabled, IPC is disabled 00:04:29.008 EAL: No shared files mode enabled, IPC is disabled 00:04:29.008 EAL: No shared files mode enabled, IPC is disabled 00:04:29.008 00:04:29.008 real 0m1.104s 00:04:29.008 user 0m0.649s 00:04:29.008 sys 0m0.430s 00:04:29.008 22:35:36 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.008 22:35:36 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:29.008 ************************************ 00:04:29.008 END TEST env_vtophys 00:04:29.008 ************************************ 00:04:29.008 22:35:36 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:04:29.008 22:35:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.008 22:35:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.008 22:35:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.008 
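The vtophys_spdk_malloc_test expand/shrink cycle above follows a simple pattern: each iteration mallocs a doubling power-of-two size (2 MB up to 1024 MB), and the heap grows by that size plus 2 MB of overhead, then shrinks by the same amount when the buffer is freed. The extra 2 MB is an observation read off this log, not a documented guarantee; a quick sketch checking the sequence:

```python
# Reproduce the "Heap on socket 0 was expanded by N MB" sequence from the
# log: doubling power-of-two mallocs, each costing its size plus 2 MB.
# The +2 MB overhead is inferred from this log, not from DPDK documentation.
malloc_mb = [2 ** n for n in range(1, 11)]      # 2 MB .. 1024 MB requests
expansions = [size + 2 for size in malloc_mb]
assert expansions == [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```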
************************************ 00:04:29.008 START TEST env_pci 00:04:29.008 ************************************ 00:04:29.008 22:35:36 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/pci/pci_ut 00:04:29.008 00:04:29.008 00:04:29.008 CUnit - A unit testing framework for C - Version 2.1-3 00:04:29.008 http://cunit.sourceforge.net/ 00:04:29.008 00:04:29.008 00:04:29.008 Suite: pci 00:04:29.008 Test: pci_hook ...[2024-12-10 22:35:36.705727] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2176726 has claimed it 00:04:29.267 EAL: Cannot find device (10000:00:01.0) 00:04:29.267 EAL: Failed to attach device on primary process 00:04:29.267 passed 00:04:29.267 00:04:29.267 Run Summary: Type Total Ran Passed Failed Inactive 00:04:29.267 suites 1 1 n/a 0 0 00:04:29.267 tests 1 1 1 0 0 00:04:29.267 asserts 25 25 25 0 n/a 00:04:29.267 00:04:29.267 Elapsed time = 0.029 seconds 00:04:29.267 00:04:29.267 real 0m0.049s 00:04:29.267 user 0m0.015s 00:04:29.267 sys 0m0.033s 00:04:29.267 22:35:36 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.267 22:35:36 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:29.267 ************************************ 00:04:29.267 END TEST env_pci 00:04:29.267 ************************************ 00:04:29.267 22:35:36 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:29.267 22:35:36 env -- env/env.sh@15 -- # uname 00:04:29.267 22:35:36 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:29.267 22:35:36 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:29.267 22:35:36 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.267 22:35:36 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:29.267 22:35:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.267 22:35:36 env -- common/autotest_common.sh@10 -- # set +x 00:04:29.267 ************************************ 00:04:29.267 START TEST env_dpdk_post_init 00:04:29.267 ************************************ 00:04:29.267 22:35:36 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:29.267 EAL: Detected CPU lcores: 96 00:04:29.267 EAL: Detected NUMA nodes: 2 00:04:29.267 EAL: Detected shared linkage of DPDK 00:04:29.267 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:29.267 EAL: Selected IOVA mode 'VA' 00:04:29.267 EAL: VFIO support initialized 00:04:29.267 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:29.267 EAL: Using IOMMU type 1 (Type 1) 00:04:29.267 EAL: Ignore mapping IO port bar(1) 00:04:29.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:29.267 EAL: Ignore mapping IO port bar(1) 00:04:29.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:29.267 EAL: Ignore mapping IO port bar(1) 00:04:29.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:29.267 EAL: Ignore mapping IO port bar(1) 00:04:29.267 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:29.527 EAL: Ignore mapping IO port bar(1) 00:04:29.527 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:29.527 EAL: Ignore mapping IO port bar(1) 00:04:29.527 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:29.527 EAL: Ignore mapping IO port bar(1) 00:04:29.527 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:29.527 EAL: Ignore mapping IO port bar(1) 00:04:29.527 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:30.096 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:30.096 EAL: Ignore mapping IO port bar(1) 00:04:30.096 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:30.096 EAL: Ignore mapping IO port bar(1) 00:04:30.096 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:30.096 EAL: Ignore mapping IO port bar(1) 00:04:30.096 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:30.355 EAL: Ignore mapping IO port bar(1) 00:04:30.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:30.356 EAL: Ignore mapping IO port bar(1) 00:04:30.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:30.356 EAL: Ignore mapping IO port bar(1) 00:04:30.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:30.356 EAL: Ignore mapping IO port bar(1) 00:04:30.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:30.356 EAL: Ignore mapping IO port bar(1) 00:04:30.356 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:33.644 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:33.644 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:33.644 Starting DPDK initialization... 00:04:33.644 Starting SPDK post initialization... 00:04:33.644 SPDK NVMe probe 00:04:33.644 Attaching to 0000:5e:00.0 00:04:33.644 Attached to 0000:5e:00.0 00:04:33.644 Cleaning up... 
00:04:33.644 00:04:33.644 real 0m4.402s 00:04:33.644 user 0m3.018s 00:04:33.644 sys 0m0.460s 00:04:33.644 22:35:41 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.644 22:35:41 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.644 ************************************ 00:04:33.644 END TEST env_dpdk_post_init 00:04:33.644 ************************************ 00:04:33.644 22:35:41 env -- env/env.sh@26 -- # uname 00:04:33.644 22:35:41 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:33.645 22:35:41 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.645 22:35:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.645 22:35:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.645 22:35:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.645 ************************************ 00:04:33.645 START TEST env_mem_callbacks 00:04:33.645 ************************************ 00:04:33.645 22:35:41 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/env/mem_callbacks/mem_callbacks 00:04:33.645 EAL: Detected CPU lcores: 96 00:04:33.645 EAL: Detected NUMA nodes: 2 00:04:33.645 EAL: Detected shared linkage of DPDK 00:04:33.645 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:33.645 EAL: Selected IOVA mode 'VA' 00:04:33.645 EAL: VFIO support initialized 00:04:33.645 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:33.645 00:04:33.645 00:04:33.645 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.645 http://cunit.sourceforge.net/ 00:04:33.645 00:04:33.645 00:04:33.645 Suite: memory 00:04:33.645 Test: test ... 
00:04:33.645 register 0x200000200000 2097152 00:04:33.645 malloc 3145728 00:04:33.645 register 0x200000400000 4194304 00:04:33.645 buf 0x200000500000 len 3145728 PASSED 00:04:33.645 malloc 64 00:04:33.645 buf 0x2000004fff40 len 64 PASSED 00:04:33.645 malloc 4194304 00:04:33.645 register 0x200000800000 6291456 00:04:33.645 buf 0x200000a00000 len 4194304 PASSED 00:04:33.645 free 0x200000500000 3145728 00:04:33.645 free 0x2000004fff40 64 00:04:33.645 unregister 0x200000400000 4194304 PASSED 00:04:33.645 free 0x200000a00000 4194304 00:04:33.645 unregister 0x200000800000 6291456 PASSED 00:04:33.645 malloc 8388608 00:04:33.645 register 0x200000400000 10485760 00:04:33.645 buf 0x200000600000 len 8388608 PASSED 00:04:33.645 free 0x200000600000 8388608 00:04:33.645 unregister 0x200000400000 10485760 PASSED 00:04:33.645 passed 00:04:33.645 00:04:33.645 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.645 suites 1 1 n/a 0 0 00:04:33.645 tests 1 1 1 0 0 00:04:33.645 asserts 15 15 15 0 n/a 00:04:33.645 00:04:33.645 Elapsed time = 0.008 seconds 00:04:33.645 00:04:33.645 real 0m0.062s 00:04:33.645 user 0m0.021s 00:04:33.645 sys 0m0.040s 00:04:33.645 22:35:41 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.645 22:35:41 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:33.645 ************************************ 00:04:33.645 END TEST env_mem_callbacks 00:04:33.645 ************************************ 00:04:33.904 00:04:33.904 real 0m6.314s 00:04:33.904 user 0m4.076s 00:04:33.904 sys 0m1.320s 00:04:33.904 22:35:41 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.904 22:35:41 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.904 ************************************ 00:04:33.904 END TEST env 00:04:33.904 ************************************ 00:04:33.904 22:35:41 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:04:33.904 22:35:41 
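The env_mem_callbacks trace above pairs each large malloc with a register call whose length is bigger than the request: malloc 3145728 registers 4194304, malloc 4194304 registers 6291456, malloc 8388608 registers 10485760, while the 64-byte malloc triggers no new registration (it is served from an already-registered region). One way to read those numbers is request plus a small allocator header, rounded up to the 2 MB hugepage size. The 4 KiB header below is a guess chosen to reproduce this log, not a value taken from SPDK/DPDK source:

```python
# Hypothetical model of the registered lengths seen in the mem_callbacks
# trace: request + small header, rounded up to 2 MB. Header size is a guess.
HUGEPAGE = 2 * 1024 * 1024
HEADER = 4096  # assumption, not from source

def registered_len(request: int) -> int:
    total = request + HEADER
    return (total + HUGEPAGE - 1) // HUGEPAGE * HUGEPAGE

assert registered_len(3145728) == 4194304    # malloc 3 MB -> register 4 MB
assert registered_len(4194304) == 6291456    # malloc 4 MB -> register 6 MB
assert registered_len(8388608) == 10485760   # malloc 8 MB -> register 10 MB
```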
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.904 22:35:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.904 22:35:41 -- common/autotest_common.sh@10 -- # set +x 00:04:33.904 ************************************ 00:04:33.904 START TEST rpc 00:04:33.904 ************************************ 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/rpc.sh 00:04:33.904 * Looking for test storage... 00:04:33.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.904 22:35:41 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.904 22:35:41 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.904 22:35:41 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.904 22:35:41 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.904 22:35:41 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.904 22:35:41 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:33.904 22:35:41 rpc -- scripts/common.sh@345 -- # : 1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.904 22:35:41 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.904 22:35:41 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@353 -- # local d=1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.904 22:35:41 rpc -- scripts/common.sh@355 -- # echo 1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.904 22:35:41 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@353 -- # local d=2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.904 22:35:41 rpc -- scripts/common.sh@355 -- # echo 2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.904 22:35:41 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.904 22:35:41 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.904 22:35:41 rpc -- scripts/common.sh@368 -- # return 0 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.904 --rc genhtml_branch_coverage=1 00:04:33.904 --rc genhtml_function_coverage=1 00:04:33.904 --rc genhtml_legend=1 00:04:33.904 --rc geninfo_all_blocks=1 00:04:33.904 --rc geninfo_unexecuted_blocks=1 00:04:33.904 00:04:33.904 ' 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.904 --rc genhtml_branch_coverage=1 00:04:33.904 --rc genhtml_function_coverage=1 00:04:33.904 --rc genhtml_legend=1 00:04:33.904 --rc geninfo_all_blocks=1 00:04:33.904 --rc geninfo_unexecuted_blocks=1 00:04:33.904 00:04:33.904 ' 00:04:33.904 22:35:41 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:33.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:33.904 --rc genhtml_branch_coverage=1 00:04:33.904 --rc genhtml_function_coverage=1 00:04:33.904 --rc genhtml_legend=1 00:04:33.904 --rc geninfo_all_blocks=1 00:04:33.905 --rc geninfo_unexecuted_blocks=1 00:04:33.905 00:04:33.905 ' 00:04:33.905 22:35:41 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:33.905 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.905 --rc genhtml_branch_coverage=1 00:04:33.905 --rc genhtml_function_coverage=1 00:04:33.905 --rc genhtml_legend=1 00:04:33.905 --rc geninfo_all_blocks=1 00:04:33.905 --rc geninfo_unexecuted_blocks=1 00:04:33.905 00:04:33.905 ' 00:04:33.905 22:35:41 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2177574 00:04:33.905 22:35:41 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:33.905 22:35:41 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -e bdev 00:04:33.905 22:35:41 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2177574 00:04:33.905 22:35:41 rpc -- common/autotest_common.sh@835 -- # '[' -z 2177574 ']' 00:04:33.905 22:35:41 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.164 22:35:41 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.164 22:35:41 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:34.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.164 22:35:41 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.164 22:35:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.164 [2024-12-10 22:35:41.681621] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:04:34.164 [2024-12-10 22:35:41.681668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177574 ] 00:04:34.164 [2024-12-10 22:35:41.758268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.164 [2024-12-10 22:35:41.796503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:34.164 [2024-12-10 22:35:41.796542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2177574' to capture a snapshot of events at runtime. 00:04:34.164 [2024-12-10 22:35:41.796549] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:34.164 [2024-12-10 22:35:41.796555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:34.164 [2024-12-10 22:35:41.796560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2177574 for offline analysis/debug. 
00:04:34.164 [2024-12-10 22:35:41.797096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.424 22:35:42 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.424 22:35:42 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.424 22:35:42 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:34.424 22:35:42 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:34.424 22:35:42 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:34.424 22:35:42 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:34.424 22:35:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.424 22:35:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.424 22:35:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.424 ************************************ 00:04:34.424 START TEST rpc_integrity 00:04:34.424 ************************************ 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.424 22:35:42 
rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.424 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:34.424 { 00:04:34.424 "name": "Malloc0", 00:04:34.424 "aliases": [ 00:04:34.424 "27585f02-ab31-426b-a348-8576f1171a54" 00:04:34.424 ], 00:04:34.424 "product_name": "Malloc disk", 00:04:34.424 "block_size": 512, 00:04:34.424 "num_blocks": 16384, 00:04:34.424 "uuid": "27585f02-ab31-426b-a348-8576f1171a54", 00:04:34.424 "assigned_rate_limits": { 00:04:34.424 "rw_ios_per_sec": 0, 00:04:34.424 "rw_mbytes_per_sec": 0, 00:04:34.424 "r_mbytes_per_sec": 0, 00:04:34.424 "w_mbytes_per_sec": 0 00:04:34.424 }, 00:04:34.424 "claimed": false, 00:04:34.424 "zoned": false, 00:04:34.424 "supported_io_types": { 00:04:34.424 "read": true, 00:04:34.424 "write": true, 00:04:34.424 "unmap": true, 00:04:34.424 "flush": true, 00:04:34.424 "reset": true, 00:04:34.424 "nvme_admin": false, 00:04:34.424 "nvme_io": false, 00:04:34.424 "nvme_io_md": false, 00:04:34.424 "write_zeroes": true, 00:04:34.424 "zcopy": true, 00:04:34.424 
"get_zone_info": false, 00:04:34.424 "zone_management": false, 00:04:34.424 "zone_append": false, 00:04:34.424 "compare": false, 00:04:34.424 "compare_and_write": false, 00:04:34.424 "abort": true, 00:04:34.424 "seek_hole": false, 00:04:34.424 "seek_data": false, 00:04:34.424 "copy": true, 00:04:34.424 "nvme_iov_md": false 00:04:34.424 }, 00:04:34.424 "memory_domains": [ 00:04:34.424 { 00:04:34.424 "dma_device_id": "system", 00:04:34.424 "dma_device_type": 1 00:04:34.424 }, 00:04:34.424 { 00:04:34.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.424 "dma_device_type": 2 00:04:34.424 } 00:04:34.424 ], 00:04:34.424 "driver_specific": {} 00:04:34.424 } 00:04:34.424 ]' 00:04:34.424 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:34.684 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:34.684 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:34.684 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.684 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.684 [2024-12-10 22:35:42.181548] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:34.684 [2024-12-10 22:35:42.181580] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.684 [2024-12-10 22:35:42.181591] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x90a140 00:04:34.684 [2024-12-10 22:35:42.181598] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.684 [2024-12-10 22:35:42.182712] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.684 [2024-12-10 22:35:42.182734] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:34.684 Passthru0 00:04:34.684 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.684 22:35:42 rpc.rpc_integrity 
-- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:34.684 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.684 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:34.685 { 00:04:34.685 "name": "Malloc0", 00:04:34.685 "aliases": [ 00:04:34.685 "27585f02-ab31-426b-a348-8576f1171a54" 00:04:34.685 ], 00:04:34.685 "product_name": "Malloc disk", 00:04:34.685 "block_size": 512, 00:04:34.685 "num_blocks": 16384, 00:04:34.685 "uuid": "27585f02-ab31-426b-a348-8576f1171a54", 00:04:34.685 "assigned_rate_limits": { 00:04:34.685 "rw_ios_per_sec": 0, 00:04:34.685 "rw_mbytes_per_sec": 0, 00:04:34.685 "r_mbytes_per_sec": 0, 00:04:34.685 "w_mbytes_per_sec": 0 00:04:34.685 }, 00:04:34.685 "claimed": true, 00:04:34.685 "claim_type": "exclusive_write", 00:04:34.685 "zoned": false, 00:04:34.685 "supported_io_types": { 00:04:34.685 "read": true, 00:04:34.685 "write": true, 00:04:34.685 "unmap": true, 00:04:34.685 "flush": true, 00:04:34.685 "reset": true, 00:04:34.685 "nvme_admin": false, 00:04:34.685 "nvme_io": false, 00:04:34.685 "nvme_io_md": false, 00:04:34.685 "write_zeroes": true, 00:04:34.685 "zcopy": true, 00:04:34.685 "get_zone_info": false, 00:04:34.685 "zone_management": false, 00:04:34.685 "zone_append": false, 00:04:34.685 "compare": false, 00:04:34.685 "compare_and_write": false, 00:04:34.685 "abort": true, 00:04:34.685 "seek_hole": false, 00:04:34.685 "seek_data": false, 00:04:34.685 "copy": true, 00:04:34.685 "nvme_iov_md": false 00:04:34.685 }, 00:04:34.685 "memory_domains": [ 00:04:34.685 { 00:04:34.685 "dma_device_id": "system", 00:04:34.685 "dma_device_type": 1 00:04:34.685 }, 00:04:34.685 { 00:04:34.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.685 "dma_device_type": 2 00:04:34.685 } 00:04:34.685 ], 00:04:34.685 "driver_specific": {} 
00:04:34.685 }, 00:04:34.685 { 00:04:34.685 "name": "Passthru0", 00:04:34.685 "aliases": [ 00:04:34.685 "7b09e4e2-a0ac-5b0e-a8c5-41154263c44b" 00:04:34.685 ], 00:04:34.685 "product_name": "passthru", 00:04:34.685 "block_size": 512, 00:04:34.685 "num_blocks": 16384, 00:04:34.685 "uuid": "7b09e4e2-a0ac-5b0e-a8c5-41154263c44b", 00:04:34.685 "assigned_rate_limits": { 00:04:34.685 "rw_ios_per_sec": 0, 00:04:34.685 "rw_mbytes_per_sec": 0, 00:04:34.685 "r_mbytes_per_sec": 0, 00:04:34.685 "w_mbytes_per_sec": 0 00:04:34.685 }, 00:04:34.685 "claimed": false, 00:04:34.685 "zoned": false, 00:04:34.685 "supported_io_types": { 00:04:34.685 "read": true, 00:04:34.685 "write": true, 00:04:34.685 "unmap": true, 00:04:34.685 "flush": true, 00:04:34.685 "reset": true, 00:04:34.685 "nvme_admin": false, 00:04:34.685 "nvme_io": false, 00:04:34.685 "nvme_io_md": false, 00:04:34.685 "write_zeroes": true, 00:04:34.685 "zcopy": true, 00:04:34.685 "get_zone_info": false, 00:04:34.685 "zone_management": false, 00:04:34.685 "zone_append": false, 00:04:34.685 "compare": false, 00:04:34.685 "compare_and_write": false, 00:04:34.685 "abort": true, 00:04:34.685 "seek_hole": false, 00:04:34.685 "seek_data": false, 00:04:34.685 "copy": true, 00:04:34.685 "nvme_iov_md": false 00:04:34.685 }, 00:04:34.685 "memory_domains": [ 00:04:34.685 { 00:04:34.685 "dma_device_id": "system", 00:04:34.685 "dma_device_type": 1 00:04:34.685 }, 00:04:34.685 { 00:04:34.685 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.685 "dma_device_type": 2 00:04:34.685 } 00:04:34.685 ], 00:04:34.685 "driver_specific": { 00:04:34.685 "passthru": { 00:04:34.685 "name": "Passthru0", 00:04:34.685 "base_bdev_name": "Malloc0" 00:04:34.685 } 00:04:34.685 } 00:04:34.685 } 00:04:34.685 ]' 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 
00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:34.685 22:35:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:34.685 00:04:34.685 real 0m0.276s 00:04:34.685 user 0m0.170s 00:04:34.685 sys 0m0.039s 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.685 22:35:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 ************************************ 00:04:34.685 END TEST rpc_integrity 00:04:34.685 ************************************ 00:04:34.685 22:35:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:34.685 22:35:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.685 22:35:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.685 22:35:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 ************************************ 00:04:34.685 START 
TEST rpc_plugins 00:04:34.685 ************************************ 00:04:34.685 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:34.685 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:34.685 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.685 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.685 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.685 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:34.685 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:34.685 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.685 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:34.946 { 00:04:34.946 "name": "Malloc1", 00:04:34.946 "aliases": [ 00:04:34.946 "36b437b8-40a6-4ad6-9ff1-2d903651be5d" 00:04:34.946 ], 00:04:34.946 "product_name": "Malloc disk", 00:04:34.946 "block_size": 4096, 00:04:34.946 "num_blocks": 256, 00:04:34.946 "uuid": "36b437b8-40a6-4ad6-9ff1-2d903651be5d", 00:04:34.946 "assigned_rate_limits": { 00:04:34.946 "rw_ios_per_sec": 0, 00:04:34.946 "rw_mbytes_per_sec": 0, 00:04:34.946 "r_mbytes_per_sec": 0, 00:04:34.946 "w_mbytes_per_sec": 0 00:04:34.946 }, 00:04:34.946 "claimed": false, 00:04:34.946 "zoned": false, 00:04:34.946 "supported_io_types": { 00:04:34.946 "read": true, 00:04:34.946 "write": true, 00:04:34.946 "unmap": true, 00:04:34.946 "flush": true, 00:04:34.946 "reset": true, 00:04:34.946 "nvme_admin": false, 00:04:34.946 "nvme_io": false, 00:04:34.946 "nvme_io_md": false, 00:04:34.946 "write_zeroes": true, 00:04:34.946 "zcopy": true, 00:04:34.946 "get_zone_info": false, 00:04:34.946 "zone_management": false, 
00:04:34.946 "zone_append": false, 00:04:34.946 "compare": false, 00:04:34.946 "compare_and_write": false, 00:04:34.946 "abort": true, 00:04:34.946 "seek_hole": false, 00:04:34.946 "seek_data": false, 00:04:34.946 "copy": true, 00:04:34.946 "nvme_iov_md": false 00:04:34.946 }, 00:04:34.946 "memory_domains": [ 00:04:34.946 { 00:04:34.946 "dma_device_id": "system", 00:04:34.946 "dma_device_type": 1 00:04:34.946 }, 00:04:34.946 { 00:04:34.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:34.946 "dma_device_type": 2 00:04:34.946 } 00:04:34.946 ], 00:04:34.946 "driver_specific": {} 00:04:34.946 } 00:04:34.946 ]' 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:34.946 22:35:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:34.946 00:04:34.946 real 0m0.137s 00:04:34.946 user 0m0.091s 00:04:34.946 sys 0m0.012s 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:34.946 22:35:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 
************************************ 00:04:34.946 END TEST rpc_plugins 00:04:34.946 ************************************ 00:04:34.946 22:35:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:34.946 22:35:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:34.946 22:35:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:34.946 22:35:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 ************************************ 00:04:34.946 START TEST rpc_trace_cmd_test 00:04:34.946 ************************************ 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:34.946 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2177574", 00:04:34.946 "tpoint_group_mask": "0x8", 00:04:34.946 "iscsi_conn": { 00:04:34.946 "mask": "0x2", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "scsi": { 00:04:34.946 "mask": "0x4", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "bdev": { 00:04:34.946 "mask": "0x8", 00:04:34.946 "tpoint_mask": "0xffffffffffffffff" 00:04:34.946 }, 00:04:34.946 "nvmf_rdma": { 00:04:34.946 "mask": "0x10", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "nvmf_tcp": { 00:04:34.946 "mask": "0x20", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "ftl": { 00:04:34.946 "mask": "0x40", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "blobfs": { 
00:04:34.946 "mask": "0x80", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "dsa": { 00:04:34.946 "mask": "0x200", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "thread": { 00:04:34.946 "mask": "0x400", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "nvme_pcie": { 00:04:34.946 "mask": "0x800", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "iaa": { 00:04:34.946 "mask": "0x1000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "nvme_tcp": { 00:04:34.946 "mask": "0x2000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "bdev_nvme": { 00:04:34.946 "mask": "0x4000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "sock": { 00:04:34.946 "mask": "0x8000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "blob": { 00:04:34.946 "mask": "0x10000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "bdev_raid": { 00:04:34.946 "mask": "0x20000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 }, 00:04:34.946 "scheduler": { 00:04:34.946 "mask": "0x40000", 00:04:34.946 "tpoint_mask": "0x0" 00:04:34.946 } 00:04:34.946 }' 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:34.946 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- 
rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:35.205 00:04:35.205 real 0m0.215s 00:04:35.205 user 0m0.180s 00:04:35.205 sys 0m0.026s 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.205 22:35:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:35.205 ************************************ 00:04:35.205 END TEST rpc_trace_cmd_test 00:04:35.206 ************************************ 00:04:35.206 22:35:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:35.206 22:35:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:35.206 22:35:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:35.206 22:35:42 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.206 22:35:42 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.206 22:35:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.206 ************************************ 00:04:35.206 START TEST rpc_daemon_integrity 00:04:35.206 ************************************ 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:35.206 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 
00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:35.465 { 00:04:35.465 "name": "Malloc2", 00:04:35.465 "aliases": [ 00:04:35.465 "7a03d9b0-cabb-4430-9c4c-4c353e156754" 00:04:35.465 ], 00:04:35.465 "product_name": "Malloc disk", 00:04:35.465 "block_size": 512, 00:04:35.465 "num_blocks": 16384, 00:04:35.465 "uuid": "7a03d9b0-cabb-4430-9c4c-4c353e156754", 00:04:35.465 "assigned_rate_limits": { 00:04:35.465 "rw_ios_per_sec": 0, 00:04:35.465 "rw_mbytes_per_sec": 0, 00:04:35.465 "r_mbytes_per_sec": 0, 00:04:35.465 "w_mbytes_per_sec": 0 00:04:35.465 }, 00:04:35.465 "claimed": false, 00:04:35.465 "zoned": false, 00:04:35.465 "supported_io_types": { 00:04:35.465 "read": true, 00:04:35.465 "write": true, 00:04:35.465 "unmap": true, 00:04:35.465 "flush": true, 00:04:35.465 "reset": true, 00:04:35.465 "nvme_admin": false, 00:04:35.465 "nvme_io": false, 00:04:35.465 "nvme_io_md": false, 00:04:35.465 "write_zeroes": true, 00:04:35.465 "zcopy": true, 00:04:35.465 "get_zone_info": false, 00:04:35.465 "zone_management": false, 00:04:35.465 "zone_append": false, 00:04:35.465 "compare": false, 00:04:35.465 "compare_and_write": false, 00:04:35.465 "abort": true, 00:04:35.465 "seek_hole": false, 00:04:35.465 "seek_data": false, 00:04:35.465 "copy": true, 00:04:35.465 "nvme_iov_md": false 00:04:35.465 }, 
00:04:35.465 "memory_domains": [ 00:04:35.465 { 00:04:35.465 "dma_device_id": "system", 00:04:35.465 "dma_device_type": 1 00:04:35.465 }, 00:04:35.465 { 00:04:35.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.465 "dma_device_type": 2 00:04:35.465 } 00:04:35.465 ], 00:04:35.465 "driver_specific": {} 00:04:35.465 } 00:04:35.465 ]' 00:04:35.465 22:35:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 [2024-12-10 22:35:43.011943] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:35.465 [2024-12-10 22:35:43.011973] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:35.465 [2024-12-10 22:35:43.011985] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7c8490 00:04:35.465 [2024-12-10 22:35:43.011991] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:35.465 [2024-12-10 22:35:43.012988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:35.465 [2024-12-10 22:35:43.013011] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:35.465 Passthru0 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.465 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:35.465 { 00:04:35.465 "name": "Malloc2", 00:04:35.465 "aliases": [ 00:04:35.465 "7a03d9b0-cabb-4430-9c4c-4c353e156754" 00:04:35.465 ], 00:04:35.465 "product_name": "Malloc disk", 00:04:35.465 "block_size": 512, 00:04:35.465 "num_blocks": 16384, 00:04:35.465 "uuid": "7a03d9b0-cabb-4430-9c4c-4c353e156754", 00:04:35.465 "assigned_rate_limits": { 00:04:35.465 "rw_ios_per_sec": 0, 00:04:35.465 "rw_mbytes_per_sec": 0, 00:04:35.465 "r_mbytes_per_sec": 0, 00:04:35.465 "w_mbytes_per_sec": 0 00:04:35.465 }, 00:04:35.465 "claimed": true, 00:04:35.465 "claim_type": "exclusive_write", 00:04:35.465 "zoned": false, 00:04:35.465 "supported_io_types": { 00:04:35.465 "read": true, 00:04:35.465 "write": true, 00:04:35.465 "unmap": true, 00:04:35.465 "flush": true, 00:04:35.465 "reset": true, 00:04:35.465 "nvme_admin": false, 00:04:35.465 "nvme_io": false, 00:04:35.465 "nvme_io_md": false, 00:04:35.465 "write_zeroes": true, 00:04:35.465 "zcopy": true, 00:04:35.465 "get_zone_info": false, 00:04:35.465 "zone_management": false, 00:04:35.465 "zone_append": false, 00:04:35.465 "compare": false, 00:04:35.465 "compare_and_write": false, 00:04:35.465 "abort": true, 00:04:35.465 "seek_hole": false, 00:04:35.465 "seek_data": false, 00:04:35.465 "copy": true, 00:04:35.465 "nvme_iov_md": false 00:04:35.465 }, 00:04:35.465 "memory_domains": [ 00:04:35.465 { 00:04:35.466 "dma_device_id": "system", 00:04:35.466 "dma_device_type": 1 00:04:35.466 }, 00:04:35.466 { 00:04:35.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.466 "dma_device_type": 2 00:04:35.466 } 00:04:35.466 ], 00:04:35.466 "driver_specific": {} 00:04:35.466 }, 00:04:35.466 { 00:04:35.466 "name": "Passthru0", 00:04:35.466 "aliases": [ 00:04:35.466 "256fd934-cd1b-500f-8cd9-c7ff4a8808db" 00:04:35.466 ], 00:04:35.466 "product_name": "passthru", 00:04:35.466 "block_size": 512, 00:04:35.466 "num_blocks": 
16384, 00:04:35.466 "uuid": "256fd934-cd1b-500f-8cd9-c7ff4a8808db", 00:04:35.466 "assigned_rate_limits": { 00:04:35.466 "rw_ios_per_sec": 0, 00:04:35.466 "rw_mbytes_per_sec": 0, 00:04:35.466 "r_mbytes_per_sec": 0, 00:04:35.466 "w_mbytes_per_sec": 0 00:04:35.466 }, 00:04:35.466 "claimed": false, 00:04:35.466 "zoned": false, 00:04:35.466 "supported_io_types": { 00:04:35.466 "read": true, 00:04:35.466 "write": true, 00:04:35.466 "unmap": true, 00:04:35.466 "flush": true, 00:04:35.466 "reset": true, 00:04:35.466 "nvme_admin": false, 00:04:35.466 "nvme_io": false, 00:04:35.466 "nvme_io_md": false, 00:04:35.466 "write_zeroes": true, 00:04:35.466 "zcopy": true, 00:04:35.466 "get_zone_info": false, 00:04:35.466 "zone_management": false, 00:04:35.466 "zone_append": false, 00:04:35.466 "compare": false, 00:04:35.466 "compare_and_write": false, 00:04:35.466 "abort": true, 00:04:35.466 "seek_hole": false, 00:04:35.466 "seek_data": false, 00:04:35.466 "copy": true, 00:04:35.466 "nvme_iov_md": false 00:04:35.466 }, 00:04:35.466 "memory_domains": [ 00:04:35.466 { 00:04:35.466 "dma_device_id": "system", 00:04:35.466 "dma_device_type": 1 00:04:35.466 }, 00:04:35.466 { 00:04:35.466 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:35.466 "dma_device_type": 2 00:04:35.466 } 00:04:35.466 ], 00:04:35.466 "driver_specific": { 00:04:35.466 "passthru": { 00:04:35.466 "name": "Passthru0", 00:04:35.466 "base_bdev_name": "Malloc2" 00:04:35.466 } 00:04:35.466 } 00:04:35.466 } 00:04:35.466 ]' 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:35.466 00:04:35.466 real 0m0.277s 00:04:35.466 user 0m0.171s 00:04:35.466 sys 0m0.042s 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.466 22:35:43 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:35.466 ************************************ 00:04:35.466 END TEST rpc_daemon_integrity 00:04:35.466 ************************************ 00:04:35.466 22:35:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:35.466 22:35:43 rpc -- rpc/rpc.sh@84 -- # killprocess 2177574 00:04:35.466 22:35:43 rpc -- common/autotest_common.sh@954 -- # '[' -z 2177574 ']' 00:04:35.466 22:35:43 rpc -- common/autotest_common.sh@958 -- # kill -0 2177574 00:04:35.466 22:35:43 rpc -- common/autotest_common.sh@959 -- # uname 00:04:35.726 22:35:43 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:35.726 22:35:43 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2177574 00:04:35.726 22:35:43 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:35.726 22:35:43 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:35.726 22:35:43 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2177574' 00:04:35.726 killing process with pid 2177574 00:04:35.726 22:35:43 rpc -- common/autotest_common.sh@973 -- # kill 2177574 00:04:35.726 22:35:43 rpc -- common/autotest_common.sh@978 -- # wait 2177574 00:04:35.985 00:04:35.985 real 0m2.083s 00:04:35.985 user 0m2.664s 00:04:35.985 sys 0m0.674s 00:04:35.985 22:35:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:35.985 22:35:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.985 ************************************ 00:04:35.985 END TEST rpc 00:04:35.985 ************************************ 00:04:35.985 22:35:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:04:35.985 22:35:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:35.985 22:35:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:35.985 22:35:43 -- common/autotest_common.sh@10 -- # set +x 00:04:35.985 ************************************ 00:04:35.985 START TEST skip_rpc 00:04:35.985 ************************************ 00:04:35.985 22:35:43 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/skip_rpc.sh 00:04:35.985 * Looking for test storage... 
00:04:35.985 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc 00:04:35.985 22:35:43 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:35.985 22:35:43 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:35.985 22:35:43 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.245 22:35:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.245 --rc genhtml_branch_coverage=1 00:04:36.245 --rc genhtml_function_coverage=1 00:04:36.245 --rc genhtml_legend=1 00:04:36.245 --rc geninfo_all_blocks=1 00:04:36.245 --rc geninfo_unexecuted_blocks=1 00:04:36.245 00:04:36.245 ' 00:04:36.245 22:35:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:04:36.245 22:35:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:04:36.245 22:35:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.245 22:35:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.245 ************************************ 00:04:36.245 START TEST skip_rpc 00:04:36.245 ************************************ 00:04:36.245 22:35:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:36.245 22:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2178209 00:04:36.245 22:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:36.245 22:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:36.245 22:35:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 
5 00:04:36.245 [2024-12-10 22:35:43.870156] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:04:36.245 [2024-12-10 22:35:43.870197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178209 ] 00:04:36.245 [2024-12-10 22:35:43.943227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:36.504 [2024-12-10 22:35:43.983639] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:41.778 22:35:48 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2178209 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2178209 ']' 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2178209 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2178209 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2178209' 00:04:41.778 killing process with pid 2178209 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2178209 00:04:41.778 22:35:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2178209 00:04:41.778 00:04:41.778 real 0m5.369s 00:04:41.778 user 0m5.126s 00:04:41.778 sys 0m0.282s 00:04:41.778 22:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.778 22:35:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.778 ************************************ 00:04:41.778 END TEST skip_rpc 00:04:41.778 ************************************ 00:04:41.778 22:35:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:41.778 22:35:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.778 22:35:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.778 22:35:49 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.778 ************************************ 00:04:41.778 START TEST skip_rpc_with_json 00:04:41.778 ************************************ 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2179154 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2179154 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2179154 ']' 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:41.778 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:41.778 [2024-12-10 22:35:49.306653] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:04:41.778 [2024-12-10 22:35:49.306693] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179154 ] 00:04:41.778 [2024-12-10 22:35:49.382550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.778 [2024-12-10 22:35:49.421453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.038 [2024-12-10 22:35:49.649380] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:42.038 request: 00:04:42.038 { 00:04:42.038 "trtype": "tcp", 00:04:42.038 "method": "nvmf_get_transports", 00:04:42.038 "req_id": 1 00:04:42.038 } 00:04:42.038 Got JSON-RPC error response 00:04:42.038 response: 00:04:42.038 { 00:04:42.038 "code": -19, 00:04:42.038 "message": "No such device" 00:04:42.038 } 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.038 [2024-12-10 22:35:49.661488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:42.038 22:35:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:42.038 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:42.297 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:42.297 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:04:42.297 { 00:04:42.297 "subsystems": [ 00:04:42.297 { 00:04:42.298 "subsystem": "fsdev", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "fsdev_set_opts", 00:04:42.298 "params": { 00:04:42.298 "fsdev_io_pool_size": 65535, 00:04:42.298 "fsdev_io_cache_size": 256 00:04:42.298 } 00:04:42.298 } 00:04:42.298 ] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "vfio_user_target", 00:04:42.298 "config": null 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "keyring", 00:04:42.298 "config": [] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "iobuf", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "iobuf_set_options", 00:04:42.298 "params": { 00:04:42.298 "small_pool_count": 8192, 00:04:42.298 "large_pool_count": 1024, 00:04:42.298 "small_bufsize": 8192, 00:04:42.298 "large_bufsize": 135168, 00:04:42.298 "enable_numa": false 00:04:42.298 } 00:04:42.298 } 00:04:42.298 ] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "sock", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "sock_set_default_impl", 00:04:42.298 "params": { 00:04:42.298 "impl_name": "posix" 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "sock_impl_set_options", 00:04:42.298 "params": { 00:04:42.298 "impl_name": "ssl", 00:04:42.298 "recv_buf_size": 4096, 00:04:42.298 "send_buf_size": 4096, 
00:04:42.298 "enable_recv_pipe": true, 00:04:42.298 "enable_quickack": false, 00:04:42.298 "enable_placement_id": 0, 00:04:42.298 "enable_zerocopy_send_server": true, 00:04:42.298 "enable_zerocopy_send_client": false, 00:04:42.298 "zerocopy_threshold": 0, 00:04:42.298 "tls_version": 0, 00:04:42.298 "enable_ktls": false 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "sock_impl_set_options", 00:04:42.298 "params": { 00:04:42.298 "impl_name": "posix", 00:04:42.298 "recv_buf_size": 2097152, 00:04:42.298 "send_buf_size": 2097152, 00:04:42.298 "enable_recv_pipe": true, 00:04:42.298 "enable_quickack": false, 00:04:42.298 "enable_placement_id": 0, 00:04:42.298 "enable_zerocopy_send_server": true, 00:04:42.298 "enable_zerocopy_send_client": false, 00:04:42.298 "zerocopy_threshold": 0, 00:04:42.298 "tls_version": 0, 00:04:42.298 "enable_ktls": false 00:04:42.298 } 00:04:42.298 } 00:04:42.298 ] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "vmd", 00:04:42.298 "config": [] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "accel", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "accel_set_options", 00:04:42.298 "params": { 00:04:42.298 "small_cache_size": 128, 00:04:42.298 "large_cache_size": 16, 00:04:42.298 "task_count": 2048, 00:04:42.298 "sequence_count": 2048, 00:04:42.298 "buf_count": 2048 00:04:42.298 } 00:04:42.298 } 00:04:42.298 ] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "bdev", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "bdev_set_options", 00:04:42.298 "params": { 00:04:42.298 "bdev_io_pool_size": 65535, 00:04:42.298 "bdev_io_cache_size": 256, 00:04:42.298 "bdev_auto_examine": true, 00:04:42.298 "iobuf_small_cache_size": 128, 00:04:42.298 "iobuf_large_cache_size": 16 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "bdev_raid_set_options", 00:04:42.298 "params": { 00:04:42.298 "process_window_size_kb": 1024, 00:04:42.298 "process_max_bandwidth_mb_sec": 0 
00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "bdev_iscsi_set_options", 00:04:42.298 "params": { 00:04:42.298 "timeout_sec": 30 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "bdev_nvme_set_options", 00:04:42.298 "params": { 00:04:42.298 "action_on_timeout": "none", 00:04:42.298 "timeout_us": 0, 00:04:42.298 "timeout_admin_us": 0, 00:04:42.298 "keep_alive_timeout_ms": 10000, 00:04:42.298 "arbitration_burst": 0, 00:04:42.298 "low_priority_weight": 0, 00:04:42.298 "medium_priority_weight": 0, 00:04:42.298 "high_priority_weight": 0, 00:04:42.298 "nvme_adminq_poll_period_us": 10000, 00:04:42.298 "nvme_ioq_poll_period_us": 0, 00:04:42.298 "io_queue_requests": 0, 00:04:42.298 "delay_cmd_submit": true, 00:04:42.298 "transport_retry_count": 4, 00:04:42.298 "bdev_retry_count": 3, 00:04:42.298 "transport_ack_timeout": 0, 00:04:42.298 "ctrlr_loss_timeout_sec": 0, 00:04:42.298 "reconnect_delay_sec": 0, 00:04:42.298 "fast_io_fail_timeout_sec": 0, 00:04:42.298 "disable_auto_failback": false, 00:04:42.298 "generate_uuids": false, 00:04:42.298 "transport_tos": 0, 00:04:42.298 "nvme_error_stat": false, 00:04:42.298 "rdma_srq_size": 0, 00:04:42.298 "io_path_stat": false, 00:04:42.298 "allow_accel_sequence": false, 00:04:42.298 "rdma_max_cq_size": 0, 00:04:42.298 "rdma_cm_event_timeout_ms": 0, 00:04:42.298 "dhchap_digests": [ 00:04:42.298 "sha256", 00:04:42.298 "sha384", 00:04:42.298 "sha512" 00:04:42.298 ], 00:04:42.298 "dhchap_dhgroups": [ 00:04:42.298 "null", 00:04:42.298 "ffdhe2048", 00:04:42.298 "ffdhe3072", 00:04:42.298 "ffdhe4096", 00:04:42.298 "ffdhe6144", 00:04:42.298 "ffdhe8192" 00:04:42.298 ], 00:04:42.298 "rdma_umr_per_io": false 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "bdev_nvme_set_hotplug", 00:04:42.298 "params": { 00:04:42.298 "period_us": 100000, 00:04:42.298 "enable": false 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "bdev_wait_for_examine" 00:04:42.298 } 00:04:42.298 
] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "scsi", 00:04:42.298 "config": null 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "scheduler", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "framework_set_scheduler", 00:04:42.298 "params": { 00:04:42.298 "name": "static" 00:04:42.298 } 00:04:42.298 } 00:04:42.298 ] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "vhost_scsi", 00:04:42.298 "config": [] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "vhost_blk", 00:04:42.298 "config": [] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "ublk", 00:04:42.298 "config": [] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "nbd", 00:04:42.298 "config": [] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "nvmf", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "nvmf_set_config", 00:04:42.298 "params": { 00:04:42.298 "discovery_filter": "match_any", 00:04:42.298 "admin_cmd_passthru": { 00:04:42.298 "identify_ctrlr": false 00:04:42.298 }, 00:04:42.298 "dhchap_digests": [ 00:04:42.298 "sha256", 00:04:42.298 "sha384", 00:04:42.298 "sha512" 00:04:42.298 ], 00:04:42.298 "dhchap_dhgroups": [ 00:04:42.298 "null", 00:04:42.298 "ffdhe2048", 00:04:42.298 "ffdhe3072", 00:04:42.298 "ffdhe4096", 00:04:42.298 "ffdhe6144", 00:04:42.298 "ffdhe8192" 00:04:42.298 ] 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "nvmf_set_max_subsystems", 00:04:42.298 "params": { 00:04:42.298 "max_subsystems": 1024 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "nvmf_set_crdt", 00:04:42.298 "params": { 00:04:42.298 "crdt1": 0, 00:04:42.298 "crdt2": 0, 00:04:42.298 "crdt3": 0 00:04:42.298 } 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "method": "nvmf_create_transport", 00:04:42.298 "params": { 00:04:42.298 "trtype": "TCP", 00:04:42.298 "max_queue_depth": 128, 00:04:42.298 "max_io_qpairs_per_ctrlr": 127, 00:04:42.298 "in_capsule_data_size": 4096, 00:04:42.298 "max_io_size": 
131072, 00:04:42.298 "io_unit_size": 131072, 00:04:42.298 "max_aq_depth": 128, 00:04:42.298 "num_shared_buffers": 511, 00:04:42.298 "buf_cache_size": 4294967295, 00:04:42.298 "dif_insert_or_strip": false, 00:04:42.298 "zcopy": false, 00:04:42.298 "c2h_success": true, 00:04:42.298 "sock_priority": 0, 00:04:42.298 "abort_timeout_sec": 1, 00:04:42.298 "ack_timeout": 0, 00:04:42.298 "data_wr_pool_size": 0 00:04:42.298 } 00:04:42.298 } 00:04:42.298 ] 00:04:42.298 }, 00:04:42.298 { 00:04:42.298 "subsystem": "iscsi", 00:04:42.298 "config": [ 00:04:42.298 { 00:04:42.298 "method": "iscsi_set_options", 00:04:42.298 "params": { 00:04:42.298 "node_base": "iqn.2016-06.io.spdk", 00:04:42.298 "max_sessions": 128, 00:04:42.298 "max_connections_per_session": 2, 00:04:42.298 "max_queue_depth": 64, 00:04:42.298 "default_time2wait": 2, 00:04:42.298 "default_time2retain": 20, 00:04:42.298 "first_burst_length": 8192, 00:04:42.298 "immediate_data": true, 00:04:42.298 "allow_duplicated_isid": false, 00:04:42.298 "error_recovery_level": 0, 00:04:42.298 "nop_timeout": 60, 00:04:42.298 "nop_in_interval": 30, 00:04:42.298 "disable_chap": false, 00:04:42.298 "require_chap": false, 00:04:42.298 "mutual_chap": false, 00:04:42.298 "chap_group": 0, 00:04:42.298 "max_large_datain_per_connection": 64, 00:04:42.298 "max_r2t_per_connection": 4, 00:04:42.298 "pdu_pool_size": 36864, 00:04:42.298 "immediate_data_pool_size": 16384, 00:04:42.298 "data_out_pool_size": 2048 00:04:42.298 } 00:04:42.299 } 00:04:42.299 ] 00:04:42.299 } 00:04:42.299 ] 00:04:42.299 } 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2179154 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2179154 ']' 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2179154 00:04:42.299 22:35:49 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179154 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179154' 00:04:42.299 killing process with pid 2179154 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2179154 00:04:42.299 22:35:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2179154 00:04:42.571 22:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2179245 00:04:42.571 22:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:04:42.571 22:35:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2179245 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2179245 ']' 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2179245 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179245 00:04:47.849 22:35:55 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179245' 00:04:47.849 killing process with pid 2179245 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2179245 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2179245 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/log.txt 00:04:47.849 00:04:47.849 real 0m6.299s 00:04:47.849 user 0m5.969s 00:04:47.849 sys 0m0.634s 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.849 22:35:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:47.849 ************************************ 00:04:47.849 END TEST skip_rpc_with_json 00:04:47.849 ************************************ 00:04:48.109 22:35:55 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:48.109 22:35:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.109 22:35:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.109 22:35:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.109 ************************************ 00:04:48.109 START TEST skip_rpc_with_delay 00:04:48.109 ************************************ 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:48.109 22:35:55 
skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:48.109 [2024-12-10 22:35:55.681680] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is 
going to be started. 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.109 00:04:48.109 real 0m0.072s 00:04:48.109 user 0m0.044s 00:04:48.109 sys 0m0.027s 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.109 22:35:55 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:48.109 ************************************ 00:04:48.109 END TEST skip_rpc_with_delay 00:04:48.109 ************************************ 00:04:48.109 22:35:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:48.109 22:35:55 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:48.109 22:35:55 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:48.109 22:35:55 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.109 22:35:55 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.109 22:35:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.109 ************************************ 00:04:48.109 START TEST exit_on_failed_rpc_init 00:04:48.109 ************************************ 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2180279 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2180279 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2180279 ']' 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.109 22:35:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:48.109 [2024-12-10 22:35:55.826557] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:04:48.109 [2024-12-10 22:35:55.826621] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180279 ] 00:04:48.369 [2024-12-10 22:35:55.901998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.369 [2024-12-10 22:35:55.943340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt ]] 00:04:48.628 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x2 00:04:48.628 [2024-12-10 22:35:56.216027] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:04:48.628 [2024-12-10 22:35:56.216072] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180372 ] 00:04:48.628 [2024-12-10 22:35:56.292514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.628 [2024-12-10 22:35:56.332758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.628 [2024-12-10 22:35:56.332812] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:48.628 [2024-12-10 22:35:56.332821] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:48.628 [2024-12-10 22:35:56.332827] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2180279 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2180279 ']' 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2180279 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2180279 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2180279' 
00:04:48.887 killing process with pid 2180279 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2180279 00:04:48.887 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2180279 00:04:49.148 00:04:49.148 real 0m0.959s 00:04:49.148 user 0m1.015s 00:04:49.148 sys 0m0.401s 00:04:49.148 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.148 22:35:56 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:49.148 ************************************ 00:04:49.148 END TEST exit_on_failed_rpc_init 00:04:49.148 ************************************ 00:04:49.148 22:35:56 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc/config.json 00:04:49.148 00:04:49.148 real 0m13.159s 00:04:49.148 user 0m12.356s 00:04:49.148 sys 0m1.633s 00:04:49.148 22:35:56 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.148 22:35:56 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.148 ************************************ 00:04:49.148 END TEST skip_rpc 00:04:49.148 ************************************ 00:04:49.148 22:35:56 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:04:49.148 22:35:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.148 22:35:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.148 22:35:56 -- common/autotest_common.sh@10 -- # set +x 00:04:49.148 ************************************ 00:04:49.148 START TEST rpc_client 00:04:49.148 ************************************ 00:04:49.148 22:35:56 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client.sh 00:04:49.408 * Looking for test storage... 
00:04:49.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client 00:04:49.408 22:35:56 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.408 22:35:56 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.408 22:35:56 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.408 22:35:56 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:49.409 22:35:56 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.409 22:35:57 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:49.409 22:35:57 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.409 22:35:57 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.409 --rc genhtml_branch_coverage=1 00:04:49.409 --rc genhtml_function_coverage=1 00:04:49.409 --rc genhtml_legend=1 00:04:49.409 --rc geninfo_all_blocks=1 00:04:49.409 --rc geninfo_unexecuted_blocks=1 00:04:49.409 00:04:49.409 ' 00:04:49.409 22:35:57 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.409 --rc genhtml_branch_coverage=1 00:04:49.409 --rc genhtml_function_coverage=1 00:04:49.409 --rc genhtml_legend=1 00:04:49.409 --rc geninfo_all_blocks=1 00:04:49.409 --rc geninfo_unexecuted_blocks=1 00:04:49.409 00:04:49.409 ' 00:04:49.409 22:35:57 rpc_client -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.409 --rc genhtml_branch_coverage=1 00:04:49.409 --rc genhtml_function_coverage=1 00:04:49.409 --rc genhtml_legend=1 00:04:49.409 --rc geninfo_all_blocks=1 00:04:49.409 --rc geninfo_unexecuted_blocks=1 00:04:49.409 00:04:49.409 ' 00:04:49.409 22:35:57 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.409 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.409 --rc genhtml_branch_coverage=1 00:04:49.409 --rc genhtml_function_coverage=1 00:04:49.409 --rc genhtml_legend=1 00:04:49.409 --rc geninfo_all_blocks=1 00:04:49.409 --rc geninfo_unexecuted_blocks=1 00:04:49.409 00:04:49.409 ' 00:04:49.409 22:35:57 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_client/rpc_client_test 00:04:49.409 OK 00:04:49.409 22:35:57 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:49.409 00:04:49.409 real 0m0.201s 00:04:49.409 user 0m0.134s 00:04:49.409 sys 0m0.081s 00:04:49.409 22:35:57 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.409 22:35:57 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:49.409 ************************************ 00:04:49.409 END TEST rpc_client 00:04:49.409 ************************************ 00:04:49.409 22:35:57 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 00:04:49.409 22:35:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.409 22:35:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.409 22:35:57 -- common/autotest_common.sh@10 -- # set +x 00:04:49.409 ************************************ 00:04:49.409 START TEST json_config 00:04:49.409 ************************************ 00:04:49.409 22:35:57 json_config -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config.sh 00:04:49.669 22:35:57 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:49.669 22:35:57 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:49.669 22:35:57 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:49.669 22:35:57 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:49.669 22:35:57 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.669 22:35:57 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.669 22:35:57 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.669 22:35:57 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.669 22:35:57 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.669 22:35:57 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.669 22:35:57 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.669 22:35:57 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.669 22:35:57 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.669 22:35:57 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.669 22:35:57 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.669 22:35:57 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:49.669 22:35:57 json_config -- scripts/common.sh@345 -- # : 1 00:04:49.669 22:35:57 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.669 22:35:57 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.669 22:35:57 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:49.669 22:35:57 json_config -- scripts/common.sh@353 -- # local d=1 00:04:49.670 22:35:57 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.670 22:35:57 json_config -- scripts/common.sh@355 -- # echo 1 00:04:49.670 22:35:57 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.670 22:35:57 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:49.670 22:35:57 json_config -- scripts/common.sh@353 -- # local d=2 00:04:49.670 22:35:57 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.670 22:35:57 json_config -- scripts/common.sh@355 -- # echo 2 00:04:49.670 22:35:57 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.670 22:35:57 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.670 22:35:57 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.670 22:35:57 json_config -- scripts/common.sh@368 -- # return 0 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.670 --rc genhtml_branch_coverage=1 00:04:49.670 --rc genhtml_function_coverage=1 00:04:49.670 --rc genhtml_legend=1 00:04:49.670 --rc geninfo_all_blocks=1 00:04:49.670 --rc geninfo_unexecuted_blocks=1 00:04:49.670 00:04:49.670 ' 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.670 --rc genhtml_branch_coverage=1 00:04:49.670 --rc genhtml_function_coverage=1 00:04:49.670 --rc genhtml_legend=1 00:04:49.670 --rc geninfo_all_blocks=1 00:04:49.670 --rc geninfo_unexecuted_blocks=1 00:04:49.670 00:04:49.670 ' 00:04:49.670 22:35:57 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.670 --rc genhtml_branch_coverage=1 00:04:49.670 --rc genhtml_function_coverage=1 00:04:49.670 --rc genhtml_legend=1 00:04:49.670 --rc geninfo_all_blocks=1 00:04:49.670 --rc geninfo_unexecuted_blocks=1 00:04:49.670 00:04:49.670 ' 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:49.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.670 --rc genhtml_branch_coverage=1 00:04:49.670 --rc genhtml_function_coverage=1 00:04:49.670 --rc genhtml_legend=1 00:04:49.670 --rc geninfo_all_blocks=1 00:04:49.670 --rc geninfo_unexecuted_blocks=1 00:04:49.670 00:04:49.670 ' 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@18 
-- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:04:49.670 22:35:57 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:49.670 22:35:57 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:49.670 22:35:57 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:49.670 22:35:57 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:49.670 22:35:57 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.670 22:35:57 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.670 22:35:57 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.670 22:35:57 json_config -- paths/export.sh@5 -- # export PATH 00:04:49.670 22:35:57 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@51 -- # : 0 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:49.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:49.670 22:35:57 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json') 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:49.670 INFO: JSON configuration test init 00:04:49.670 22:35:57 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.670 22:35:57 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:49.670 22:35:57 json_config -- json_config/common.sh@9 -- # local app=target 00:04:49.670 22:35:57 json_config -- json_config/common.sh@10 -- # shift 00:04:49.670 22:35:57 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:49.670 22:35:57 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:49.670 22:35:57 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:49.670 22:35:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.670 22:35:57 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:49.670 22:35:57 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2180724 00:04:49.670 22:35:57 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:49.670 Waiting for target to run... 
00:04:49.670 22:35:57 json_config -- json_config/common.sh@25 -- # waitforlisten 2180724 /var/tmp/spdk_tgt.sock 00:04:49.670 22:35:57 json_config -- common/autotest_common.sh@835 -- # '[' -z 2180724 ']' 00:04:49.670 22:35:57 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:49.671 22:35:57 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:49.671 22:35:57 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.671 22:35:57 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:49.671 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:49.671 22:35:57 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.671 22:35:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:49.671 [2024-12-10 22:35:57.355927] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:04:49.671 [2024-12-10 22:35:57.355975] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2180724 ] 00:04:50.238 [2024-12-10 22:35:57.814489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.238 [2024-12-10 22:35:57.870568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:50.498 22:35:58 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:50.498 22:35:58 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:50.498 22:35:58 json_config -- json_config/common.sh@26 -- # echo '' 00:04:50.498 00:04:50.498 22:35:58 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:50.498 22:35:58 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:50.498 22:35:58 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.498 22:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.498 22:35:58 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:50.498 22:35:58 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:50.498 22:35:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:50.498 22:35:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:50.757 22:35:58 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:50.757 22:35:58 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:50.757 22:35:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:54.046 22:36:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.046 22:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:54.046 22:36:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@54 -- # sort 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:54.046 22:36:01 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:54.046 22:36:01 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:54.046 22:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:54.046 22:36:01 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:54.046 22:36:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:54.046 22:36:01 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:54.046 22:36:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:54.046 MallocForNvmf0 00:04:54.305 22:36:01 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name 
MallocForNvmf1 00:04:54.305 22:36:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:54.305 MallocForNvmf1 00:04:54.305 22:36:01 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:54.305 22:36:01 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:54.564 [2024-12-10 22:36:02.165232] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:54.564 22:36:02 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.564 22:36:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:54.822 22:36:02 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:54.822 22:36:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:55.081 22:36:02 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:55.081 22:36:02 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:55.081 22:36:02 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:55.341 22:36:02 
json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:55.341 [2024-12-10 22:36:02.983761] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:55.341 22:36:03 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:55.341 22:36:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.341 22:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.341 22:36:03 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:55.341 22:36:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.341 22:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.600 22:36:03 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:55.600 22:36:03 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.600 22:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:55.600 MallocBdevForConfigChangeCheck 00:04:55.600 22:36:03 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:55.600 22:36:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:55.600 22:36:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:55.600 22:36:03 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:55.600 22:36:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.168 22:36:03 json_config -- 
json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:04:56.168 INFO: shutting down applications... 00:04:56.168 22:36:03 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:56.168 22:36:03 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:56.168 22:36:03 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:56.168 22:36:03 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:57.548 Calling clear_iscsi_subsystem 00:04:57.548 Calling clear_nvmf_subsystem 00:04:57.548 Calling clear_nbd_subsystem 00:04:57.548 Calling clear_ublk_subsystem 00:04:57.548 Calling clear_vhost_blk_subsystem 00:04:57.548 Calling clear_vhost_scsi_subsystem 00:04:57.548 Calling clear_bdev_subsystem 00:04:57.548 22:36:05 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py 00:04:57.548 22:36:05 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:57.548 22:36:05 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:57.548 22:36:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:57.548 22:36:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:57.548 22:36:05 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method check_empty 00:04:58.115 22:36:05 json_config -- json_config/json_config.sh@352 -- # break 00:04:58.115 22:36:05 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:58.115 22:36:05 
json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:04:58.115 22:36:05 json_config -- json_config/common.sh@31 -- # local app=target 00:04:58.115 22:36:05 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:58.115 22:36:05 json_config -- json_config/common.sh@35 -- # [[ -n 2180724 ]] 00:04:58.115 22:36:05 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2180724 00:04:58.115 22:36:05 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:58.115 22:36:05 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.115 22:36:05 json_config -- json_config/common.sh@41 -- # kill -0 2180724 00:04:58.115 22:36:05 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:58.684 22:36:06 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:58.684 22:36:06 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:58.684 22:36:06 json_config -- json_config/common.sh@41 -- # kill -0 2180724 00:04:58.684 22:36:06 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:58.684 22:36:06 json_config -- json_config/common.sh@43 -- # break 00:04:58.684 22:36:06 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:58.684 22:36:06 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:58.684 SPDK target shutdown done 00:04:58.684 22:36:06 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:58.684 INFO: relaunching applications... 
00:04:58.684 22:36:06 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:58.684 22:36:06 json_config -- json_config/common.sh@9 -- # local app=target 00:04:58.684 22:36:06 json_config -- json_config/common.sh@10 -- # shift 00:04:58.684 22:36:06 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:58.684 22:36:06 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:58.684 22:36:06 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:58.684 22:36:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.684 22:36:06 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:58.684 22:36:06 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2182361 00:04:58.684 22:36:06 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:58.684 Waiting for target to run... 00:04:58.685 22:36:06 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:04:58.685 22:36:06 json_config -- json_config/common.sh@25 -- # waitforlisten 2182361 /var/tmp/spdk_tgt.sock 00:04:58.685 22:36:06 json_config -- common/autotest_common.sh@835 -- # '[' -z 2182361 ']' 00:04:58.685 22:36:06 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:58.685 22:36:06 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:58.685 22:36:06 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:58.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:58.685 22:36:06 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:58.685 22:36:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:58.685 [2024-12-10 22:36:06.214260] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:04:58.685 [2024-12-10 22:36:06.214323] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182361 ] 00:04:59.252 [2024-12-10 22:36:06.675863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.252 [2024-12-10 22:36:06.721433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.635 [2024-12-10 22:36:09.751017] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:02.635 [2024-12-10 22:36:09.783390] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.894 22:36:10 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.894 22:36:10 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:02.894 22:36:10 json_config -- json_config/common.sh@26 -- # echo '' 00:05:02.894 00:05:02.894 22:36:10 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:02.894 22:36:10 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:02.894 INFO: Checking if target configuration is the same... 
00:05:02.894 22:36:10 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:05:02.894 22:36:10 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:02.894 22:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.894 + '[' 2 -ne 2 ']' 00:05:02.894 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh 00:05:02.894 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../.. 00:05:02.894 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:05:02.894 +++ basename /dev/fd/62 00:05:02.894 ++ mktemp /tmp/62.XXX 00:05:02.894 + tmp_file_1=/tmp/62.qRd 00:05:02.894 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:05:02.894 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:02.894 + tmp_file_2=/tmp/spdk_tgt_config.json.Wsd 00:05:02.894 + ret=0 00:05:02.894 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:05:03.153 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:05:03.153 + diff -u /tmp/62.qRd /tmp/spdk_tgt_config.json.Wsd 00:05:03.153 + echo 'INFO: JSON config files are the same' 00:05:03.153 INFO: JSON config files are the same 00:05:03.153 + rm /tmp/62.qRd /tmp/spdk_tgt_config.json.Wsd 00:05:03.153 + exit 0 00:05:03.153 22:36:10 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:03.153 22:36:10 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:03.153 INFO: changing configuration and checking if this can be detected... 
00:05:03.153 22:36:10 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.153 22:36:10 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:03.412 22:36:11 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:03.412 22:36:11 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:03.412 22:36:11 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:05:03.412 + '[' 2 -ne 2 ']' 00:05:03.412 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_diff.sh 00:05:03.412 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/../.. 
00:05:03.412 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:05:03.412 +++ basename /dev/fd/62 00:05:03.412 ++ mktemp /tmp/62.XXX 00:05:03.412 + tmp_file_1=/tmp/62.B2b 00:05:03.412 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:05:03.412 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:03.412 + tmp_file_2=/tmp/spdk_tgt_config.json.xMz 00:05:03.412 + ret=0 00:05:03.412 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:05:03.979 + /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/config_filter.py -method sort 00:05:03.979 + diff -u /tmp/62.B2b /tmp/spdk_tgt_config.json.xMz 00:05:03.979 + ret=1 00:05:03.979 + echo '=== Start of file: /tmp/62.B2b ===' 00:05:03.979 + cat /tmp/62.B2b 00:05:03.979 + echo '=== End of file: /tmp/62.B2b ===' 00:05:03.979 + echo '' 00:05:03.979 + echo '=== Start of file: /tmp/spdk_tgt_config.json.xMz ===' 00:05:03.979 + cat /tmp/spdk_tgt_config.json.xMz 00:05:03.979 + echo '=== End of file: /tmp/spdk_tgt_config.json.xMz ===' 00:05:03.979 + echo '' 00:05:03.979 + rm /tmp/62.B2b /tmp/spdk_tgt_config.json.xMz 00:05:03.979 + exit 1 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:03.979 INFO: configuration change detected. 
00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:03.979 22:36:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.979 22:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@324 -- # [[ -n 2182361 ]] 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:03.979 22:36:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:03.979 22:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:03.979 22:36:11 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:03.979 22:36:11 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:03.979 22:36:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.980 22:36:11 json_config -- json_config/json_config.sh@330 -- # killprocess 2182361 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@954 -- # '[' -z 2182361 ']' 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@958 -- # kill -0 
2182361 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@959 -- # uname 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2182361 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2182361' 00:05:03.980 killing process with pid 2182361 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@973 -- # kill 2182361 00:05:03.980 22:36:11 json_config -- common/autotest_common.sh@978 -- # wait 2182361 00:05:05.884 22:36:13 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/spdk_tgt_config.json 00:05:05.884 22:36:13 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:05.884 22:36:13 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.884 22:36:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.884 22:36:13 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:05.884 22:36:13 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:05.884 INFO: Success 00:05:05.884 00:05:05.884 real 0m16.050s 00:05:05.884 user 0m16.559s 00:05:05.884 sys 0m2.798s 00:05:05.884 22:36:13 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:05.884 22:36:13 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.884 ************************************ 00:05:05.884 END TEST json_config 00:05:05.884 ************************************ 00:05:05.884 22:36:13 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh 00:05:05.884 22:36:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.884 22:36:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.884 22:36:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.884 ************************************ 00:05:05.884 START TEST json_config_extra_key 00:05:05.884 ************************************ 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/json_config_extra_key.sh 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.884 22:36:13 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.884 --rc genhtml_branch_coverage=1 00:05:05.884 --rc genhtml_function_coverage=1 00:05:05.884 --rc genhtml_legend=1 00:05:05.884 --rc geninfo_all_blocks=1 
00:05:05.884 --rc geninfo_unexecuted_blocks=1 00:05:05.884 00:05:05.884 ' 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.884 --rc genhtml_branch_coverage=1 00:05:05.884 --rc genhtml_function_coverage=1 00:05:05.884 --rc genhtml_legend=1 00:05:05.884 --rc geninfo_all_blocks=1 00:05:05.884 --rc geninfo_unexecuted_blocks=1 00:05:05.884 00:05:05.884 ' 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.884 --rc genhtml_branch_coverage=1 00:05:05.884 --rc genhtml_function_coverage=1 00:05:05.884 --rc genhtml_legend=1 00:05:05.884 --rc geninfo_all_blocks=1 00:05:05.884 --rc geninfo_unexecuted_blocks=1 00:05:05.884 00:05:05.884 ' 00:05:05.884 22:36:13 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:05.884 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.884 --rc genhtml_branch_coverage=1 00:05:05.884 --rc genhtml_function_coverage=1 00:05:05.884 --rc genhtml_legend=1 00:05:05.884 --rc geninfo_all_blocks=1 00:05:05.884 --rc geninfo_unexecuted_blocks=1 00:05:05.884 00:05:05.884 ' 00:05:05.884 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:05.884 22:36:13 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:05.885 22:36:13 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:05.885 22:36:13 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:05.885 22:36:13 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:05.885 22:36:13 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:05.885 22:36:13 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.885 22:36:13 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.885 22:36:13 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.885 22:36:13 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:05.885 22:36:13 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:05.885 22:36:13 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:05.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:05.885 22:36:13 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/common.sh 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json') 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:05.885 INFO: launching applications... 00:05:05.885 22:36:13 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2184071 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.885 Waiting for target to run... 
00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2184071 /var/tmp/spdk_tgt.sock 00:05:05.885 22:36:13 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2184071 ']' 00:05:05.885 22:36:13 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/extra_key.json 00:05:05.885 22:36:13 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.885 22:36:13 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:05.885 22:36:13 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.885 22:36:13 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:05.885 22:36:13 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:05.885 [2024-12-10 22:36:13.456226] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:05.885 [2024-12-10 22:36:13.456279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184071 ] 00:05:06.453 [2024-12-10 22:36:13.913094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.453 [2024-12-10 22:36:13.968516] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.712 22:36:14 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:06.712 22:36:14 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:06.712 00:05:06.712 22:36:14 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:06.712 INFO: shutting down applications... 00:05:06.712 22:36:14 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2184071 ]] 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2184071 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2184071 00:05:06.712 22:36:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:07.281 22:36:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:07.281 22:36:14 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:07.281 22:36:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2184071 00:05:07.281 22:36:14 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:07.281 22:36:14 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:07.281 22:36:14 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:07.281 22:36:14 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:07.281 SPDK target shutdown done 00:05:07.281 22:36:14 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:07.281 Success 00:05:07.281 00:05:07.281 real 0m1.582s 00:05:07.281 user 0m1.203s 00:05:07.281 sys 0m0.575s 00:05:07.281 22:36:14 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:07.281 22:36:14 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.281 ************************************ 00:05:07.281 END TEST json_config_extra_key 00:05:07.281 ************************************ 00:05:07.281 22:36:14 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.281 22:36:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:07.281 22:36:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:07.281 22:36:14 -- common/autotest_common.sh@10 -- # set +x 00:05:07.281 ************************************ 00:05:07.281 START TEST alias_rpc 00:05:07.281 ************************************ 00:05:07.281 22:36:14 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:07.281 * Looking for test storage... 
00:05:07.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/alias_rpc 00:05:07.282 22:36:14 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:07.282 22:36:14 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:07.282 22:36:14 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.553 22:36:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:07.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.553 --rc genhtml_branch_coverage=1 00:05:07.553 --rc genhtml_function_coverage=1 00:05:07.553 --rc genhtml_legend=1 00:05:07.553 --rc geninfo_all_blocks=1 00:05:07.553 --rc geninfo_unexecuted_blocks=1 00:05:07.553 00:05:07.553 ' 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:07.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.553 --rc genhtml_branch_coverage=1 00:05:07.553 --rc genhtml_function_coverage=1 00:05:07.553 --rc genhtml_legend=1 00:05:07.553 --rc geninfo_all_blocks=1 00:05:07.553 --rc geninfo_unexecuted_blocks=1 00:05:07.553 00:05:07.553 ' 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:07.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.553 --rc genhtml_branch_coverage=1 00:05:07.553 --rc genhtml_function_coverage=1 00:05:07.553 --rc genhtml_legend=1 00:05:07.553 --rc geninfo_all_blocks=1 00:05:07.553 --rc geninfo_unexecuted_blocks=1 00:05:07.553 00:05:07.553 ' 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:07.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.553 --rc genhtml_branch_coverage=1 00:05:07.553 --rc genhtml_function_coverage=1 00:05:07.553 --rc genhtml_legend=1 00:05:07.553 --rc geninfo_all_blocks=1 00:05:07.553 --rc geninfo_unexecuted_blocks=1 00:05:07.553 00:05:07.553 ' 00:05:07.553 22:36:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.553 22:36:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2184515 00:05:07.553 22:36:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2184515 00:05:07.553 22:36:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2184515 ']' 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:07.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.553 22:36:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.553 [2024-12-10 22:36:15.105307] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:07.553 [2024-12-10 22:36:15.105360] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184515 ] 00:05:07.553 [2024-12-10 22:36:15.178084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.553 [2024-12-10 22:36:15.217096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.813 22:36:15 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.813 22:36:15 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:07.813 22:36:15 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_config -i 00:05:08.072 22:36:15 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2184515 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2184515 ']' 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2184515 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184515 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184515' 00:05:08.072 killing process with pid 2184515 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@973 -- # kill 2184515 00:05:08.072 22:36:15 alias_rpc -- common/autotest_common.sh@978 -- # wait 2184515 00:05:08.331 00:05:08.331 real 0m1.136s 00:05:08.331 user 0m1.164s 00:05:08.331 sys 0m0.407s 00:05:08.331 22:36:16 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.331 22:36:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.331 ************************************ 00:05:08.331 END TEST alias_rpc 00:05:08.331 ************************************ 00:05:08.331 22:36:16 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:08.331 22:36:16 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:05:08.331 22:36:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:08.331 22:36:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:08.331 22:36:16 -- common/autotest_common.sh@10 -- # set +x 00:05:08.591 ************************************ 00:05:08.591 START TEST spdkcli_tcp 00:05:08.591 ************************************ 00:05:08.591 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/tcp.sh 00:05:08.591 * Looking for test storage... 
00:05:08.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:05:08.591 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:08.591 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:08.591 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:08.591 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:08.591 22:36:16 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:08.592 22:36:16 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:08.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.592 --rc genhtml_branch_coverage=1 00:05:08.592 --rc genhtml_function_coverage=1 00:05:08.592 --rc genhtml_legend=1 00:05:08.592 --rc geninfo_all_blocks=1 00:05:08.592 --rc geninfo_unexecuted_blocks=1 00:05:08.592 00:05:08.592 ' 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:08.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.592 --rc genhtml_branch_coverage=1 00:05:08.592 --rc genhtml_function_coverage=1 00:05:08.592 --rc genhtml_legend=1 00:05:08.592 --rc geninfo_all_blocks=1 00:05:08.592 --rc geninfo_unexecuted_blocks=1 00:05:08.592 00:05:08.592 ' 00:05:08.592 22:36:16 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:08.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.592 --rc genhtml_branch_coverage=1 00:05:08.592 --rc genhtml_function_coverage=1 00:05:08.592 --rc genhtml_legend=1 00:05:08.592 --rc geninfo_all_blocks=1 00:05:08.592 --rc geninfo_unexecuted_blocks=1 00:05:08.592 00:05:08.592 ' 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:08.592 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:08.592 --rc genhtml_branch_coverage=1 00:05:08.592 --rc genhtml_function_coverage=1 00:05:08.592 --rc genhtml_legend=1 00:05:08.592 --rc geninfo_all_blocks=1 00:05:08.592 --rc geninfo_unexecuted_blocks=1 00:05:08.592 00:05:08.592 ' 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2184697 00:05:08.592 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:08.592 22:36:16 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 2184697 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2184697 ']' 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.592 22:36:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:08.592 [2024-12-10 22:36:16.314567] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:08.592 [2024-12-10 22:36:16.314617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184697 ] 00:05:08.852 [2024-12-10 22:36:16.390417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:08.852 [2024-12-10 22:36:16.432936] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:08.852 [2024-12-10 22:36:16.432938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.111 22:36:16 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.112 22:36:16 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:09.112 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2184849 00:05:09.112 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:09.112 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:09.112 [ 00:05:09.112 "bdev_malloc_delete", 00:05:09.112 "bdev_malloc_create", 00:05:09.112 "bdev_null_resize", 00:05:09.112 "bdev_null_delete", 00:05:09.112 "bdev_null_create", 00:05:09.112 "bdev_nvme_cuse_unregister", 00:05:09.112 "bdev_nvme_cuse_register", 00:05:09.112 "bdev_opal_new_user", 00:05:09.112 "bdev_opal_set_lock_state", 00:05:09.112 "bdev_opal_delete", 00:05:09.112 "bdev_opal_get_info", 00:05:09.112 "bdev_opal_create", 00:05:09.112 "bdev_nvme_opal_revert", 00:05:09.112 "bdev_nvme_opal_init", 00:05:09.112 "bdev_nvme_send_cmd", 00:05:09.112 "bdev_nvme_set_keys", 00:05:09.112 "bdev_nvme_get_path_iostat", 00:05:09.112 "bdev_nvme_get_mdns_discovery_info", 00:05:09.112 "bdev_nvme_stop_mdns_discovery", 00:05:09.112 "bdev_nvme_start_mdns_discovery", 00:05:09.112 "bdev_nvme_set_multipath_policy", 00:05:09.112 "bdev_nvme_set_preferred_path", 00:05:09.112 "bdev_nvme_get_io_paths", 00:05:09.112 "bdev_nvme_remove_error_injection", 00:05:09.112 "bdev_nvme_add_error_injection", 00:05:09.112 "bdev_nvme_get_discovery_info", 00:05:09.112 "bdev_nvme_stop_discovery", 00:05:09.112 "bdev_nvme_start_discovery", 00:05:09.112 "bdev_nvme_get_controller_health_info", 00:05:09.112 "bdev_nvme_disable_controller", 00:05:09.112 "bdev_nvme_enable_controller", 00:05:09.112 "bdev_nvme_reset_controller", 00:05:09.112 "bdev_nvme_get_transport_statistics", 00:05:09.112 "bdev_nvme_apply_firmware", 00:05:09.112 "bdev_nvme_detach_controller", 00:05:09.112 "bdev_nvme_get_controllers", 00:05:09.112 "bdev_nvme_attach_controller", 00:05:09.112 "bdev_nvme_set_hotplug", 00:05:09.112 "bdev_nvme_set_options", 00:05:09.112 "bdev_passthru_delete", 00:05:09.112 "bdev_passthru_create", 00:05:09.112 "bdev_lvol_set_parent_bdev", 00:05:09.112 "bdev_lvol_set_parent", 00:05:09.112 "bdev_lvol_check_shallow_copy", 00:05:09.112 "bdev_lvol_start_shallow_copy", 00:05:09.112 "bdev_lvol_grow_lvstore", 00:05:09.112 "bdev_lvol_get_lvols", 00:05:09.112 
"bdev_lvol_get_lvstores", 00:05:09.112 "bdev_lvol_delete", 00:05:09.112 "bdev_lvol_set_read_only", 00:05:09.112 "bdev_lvol_resize", 00:05:09.112 "bdev_lvol_decouple_parent", 00:05:09.112 "bdev_lvol_inflate", 00:05:09.112 "bdev_lvol_rename", 00:05:09.112 "bdev_lvol_clone_bdev", 00:05:09.112 "bdev_lvol_clone", 00:05:09.112 "bdev_lvol_snapshot", 00:05:09.112 "bdev_lvol_create", 00:05:09.112 "bdev_lvol_delete_lvstore", 00:05:09.112 "bdev_lvol_rename_lvstore", 00:05:09.112 "bdev_lvol_create_lvstore", 00:05:09.112 "bdev_raid_set_options", 00:05:09.112 "bdev_raid_remove_base_bdev", 00:05:09.112 "bdev_raid_add_base_bdev", 00:05:09.112 "bdev_raid_delete", 00:05:09.112 "bdev_raid_create", 00:05:09.112 "bdev_raid_get_bdevs", 00:05:09.112 "bdev_error_inject_error", 00:05:09.112 "bdev_error_delete", 00:05:09.112 "bdev_error_create", 00:05:09.112 "bdev_split_delete", 00:05:09.112 "bdev_split_create", 00:05:09.112 "bdev_delay_delete", 00:05:09.112 "bdev_delay_create", 00:05:09.112 "bdev_delay_update_latency", 00:05:09.112 "bdev_zone_block_delete", 00:05:09.112 "bdev_zone_block_create", 00:05:09.112 "blobfs_create", 00:05:09.112 "blobfs_detect", 00:05:09.112 "blobfs_set_cache_size", 00:05:09.112 "bdev_aio_delete", 00:05:09.112 "bdev_aio_rescan", 00:05:09.112 "bdev_aio_create", 00:05:09.112 "bdev_ftl_set_property", 00:05:09.112 "bdev_ftl_get_properties", 00:05:09.112 "bdev_ftl_get_stats", 00:05:09.112 "bdev_ftl_unmap", 00:05:09.112 "bdev_ftl_unload", 00:05:09.112 "bdev_ftl_delete", 00:05:09.112 "bdev_ftl_load", 00:05:09.112 "bdev_ftl_create", 00:05:09.112 "bdev_virtio_attach_controller", 00:05:09.112 "bdev_virtio_scsi_get_devices", 00:05:09.112 "bdev_virtio_detach_controller", 00:05:09.112 "bdev_virtio_blk_set_hotplug", 00:05:09.112 "bdev_iscsi_delete", 00:05:09.112 "bdev_iscsi_create", 00:05:09.112 "bdev_iscsi_set_options", 00:05:09.112 "accel_error_inject_error", 00:05:09.112 "ioat_scan_accel_module", 00:05:09.112 "dsa_scan_accel_module", 00:05:09.112 "iaa_scan_accel_module", 
00:05:09.112 "vfu_virtio_create_fs_endpoint", 00:05:09.112 "vfu_virtio_create_scsi_endpoint", 00:05:09.112 "vfu_virtio_scsi_remove_target", 00:05:09.112 "vfu_virtio_scsi_add_target", 00:05:09.112 "vfu_virtio_create_blk_endpoint", 00:05:09.112 "vfu_virtio_delete_endpoint", 00:05:09.112 "keyring_file_remove_key", 00:05:09.112 "keyring_file_add_key", 00:05:09.112 "keyring_linux_set_options", 00:05:09.112 "fsdev_aio_delete", 00:05:09.112 "fsdev_aio_create", 00:05:09.112 "iscsi_get_histogram", 00:05:09.112 "iscsi_enable_histogram", 00:05:09.112 "iscsi_set_options", 00:05:09.112 "iscsi_get_auth_groups", 00:05:09.112 "iscsi_auth_group_remove_secret", 00:05:09.112 "iscsi_auth_group_add_secret", 00:05:09.112 "iscsi_delete_auth_group", 00:05:09.112 "iscsi_create_auth_group", 00:05:09.112 "iscsi_set_discovery_auth", 00:05:09.112 "iscsi_get_options", 00:05:09.112 "iscsi_target_node_request_logout", 00:05:09.112 "iscsi_target_node_set_redirect", 00:05:09.112 "iscsi_target_node_set_auth", 00:05:09.112 "iscsi_target_node_add_lun", 00:05:09.112 "iscsi_get_stats", 00:05:09.112 "iscsi_get_connections", 00:05:09.112 "iscsi_portal_group_set_auth", 00:05:09.112 "iscsi_start_portal_group", 00:05:09.112 "iscsi_delete_portal_group", 00:05:09.112 "iscsi_create_portal_group", 00:05:09.112 "iscsi_get_portal_groups", 00:05:09.112 "iscsi_delete_target_node", 00:05:09.112 "iscsi_target_node_remove_pg_ig_maps", 00:05:09.112 "iscsi_target_node_add_pg_ig_maps", 00:05:09.112 "iscsi_create_target_node", 00:05:09.112 "iscsi_get_target_nodes", 00:05:09.112 "iscsi_delete_initiator_group", 00:05:09.112 "iscsi_initiator_group_remove_initiators", 00:05:09.112 "iscsi_initiator_group_add_initiators", 00:05:09.112 "iscsi_create_initiator_group", 00:05:09.112 "iscsi_get_initiator_groups", 00:05:09.112 "nvmf_set_crdt", 00:05:09.112 "nvmf_set_config", 00:05:09.112 "nvmf_set_max_subsystems", 00:05:09.112 "nvmf_stop_mdns_prr", 00:05:09.112 "nvmf_publish_mdns_prr", 00:05:09.112 "nvmf_subsystem_get_listeners", 
00:05:09.112 "nvmf_subsystem_get_qpairs", 00:05:09.112 "nvmf_subsystem_get_controllers", 00:05:09.130 "nvmf_get_stats", 00:05:09.130 "nvmf_get_transports", 00:05:09.130 "nvmf_create_transport", 00:05:09.130 "nvmf_get_targets", 00:05:09.130 "nvmf_delete_target", 00:05:09.130 "nvmf_create_target", 00:05:09.130 "nvmf_subsystem_allow_any_host", 00:05:09.130 "nvmf_subsystem_set_keys", 00:05:09.130 "nvmf_subsystem_remove_host", 00:05:09.130 "nvmf_subsystem_add_host", 00:05:09.130 "nvmf_ns_remove_host", 00:05:09.130 "nvmf_ns_add_host", 00:05:09.130 "nvmf_subsystem_remove_ns", 00:05:09.130 "nvmf_subsystem_set_ns_ana_group", 00:05:09.130 "nvmf_subsystem_add_ns", 00:05:09.130 "nvmf_subsystem_listener_set_ana_state", 00:05:09.130 "nvmf_discovery_get_referrals", 00:05:09.130 "nvmf_discovery_remove_referral", 00:05:09.130 "nvmf_discovery_add_referral", 00:05:09.130 "nvmf_subsystem_remove_listener", 00:05:09.130 "nvmf_subsystem_add_listener", 00:05:09.130 "nvmf_delete_subsystem", 00:05:09.130 "nvmf_create_subsystem", 00:05:09.130 "nvmf_get_subsystems", 00:05:09.130 "env_dpdk_get_mem_stats", 00:05:09.130 "nbd_get_disks", 00:05:09.130 "nbd_stop_disk", 00:05:09.130 "nbd_start_disk", 00:05:09.130 "ublk_recover_disk", 00:05:09.130 "ublk_get_disks", 00:05:09.130 "ublk_stop_disk", 00:05:09.130 "ublk_start_disk", 00:05:09.130 "ublk_destroy_target", 00:05:09.130 "ublk_create_target", 00:05:09.130 "virtio_blk_create_transport", 00:05:09.130 "virtio_blk_get_transports", 00:05:09.130 "vhost_controller_set_coalescing", 00:05:09.130 "vhost_get_controllers", 00:05:09.130 "vhost_delete_controller", 00:05:09.130 "vhost_create_blk_controller", 00:05:09.130 "vhost_scsi_controller_remove_target", 00:05:09.130 "vhost_scsi_controller_add_target", 00:05:09.130 "vhost_start_scsi_controller", 00:05:09.130 "vhost_create_scsi_controller", 00:05:09.130 "thread_set_cpumask", 00:05:09.130 "scheduler_set_options", 00:05:09.130 "framework_get_governor", 00:05:09.130 "framework_get_scheduler", 00:05:09.130 
"framework_set_scheduler", 00:05:09.130 "framework_get_reactors", 00:05:09.130 "thread_get_io_channels", 00:05:09.130 "thread_get_pollers", 00:05:09.130 "thread_get_stats", 00:05:09.130 "framework_monitor_context_switch", 00:05:09.130 "spdk_kill_instance", 00:05:09.130 "log_enable_timestamps", 00:05:09.130 "log_get_flags", 00:05:09.130 "log_clear_flag", 00:05:09.130 "log_set_flag", 00:05:09.130 "log_get_level", 00:05:09.130 "log_set_level", 00:05:09.130 "log_get_print_level", 00:05:09.130 "log_set_print_level", 00:05:09.130 "framework_enable_cpumask_locks", 00:05:09.130 "framework_disable_cpumask_locks", 00:05:09.130 "framework_wait_init", 00:05:09.130 "framework_start_init", 00:05:09.130 "scsi_get_devices", 00:05:09.130 "bdev_get_histogram", 00:05:09.130 "bdev_enable_histogram", 00:05:09.130 "bdev_set_qos_limit", 00:05:09.130 "bdev_set_qd_sampling_period", 00:05:09.130 "bdev_get_bdevs", 00:05:09.130 "bdev_reset_iostat", 00:05:09.130 "bdev_get_iostat", 00:05:09.130 "bdev_examine", 00:05:09.130 "bdev_wait_for_examine", 00:05:09.130 "bdev_set_options", 00:05:09.130 "accel_get_stats", 00:05:09.130 "accel_set_options", 00:05:09.130 "accel_set_driver", 00:05:09.130 "accel_crypto_key_destroy", 00:05:09.130 "accel_crypto_keys_get", 00:05:09.130 "accel_crypto_key_create", 00:05:09.130 "accel_assign_opc", 00:05:09.130 "accel_get_module_info", 00:05:09.130 "accel_get_opc_assignments", 00:05:09.130 "vmd_rescan", 00:05:09.130 "vmd_remove_device", 00:05:09.130 "vmd_enable", 00:05:09.130 "sock_get_default_impl", 00:05:09.130 "sock_set_default_impl", 00:05:09.130 "sock_impl_set_options", 00:05:09.130 "sock_impl_get_options", 00:05:09.130 "iobuf_get_stats", 00:05:09.130 "iobuf_set_options", 00:05:09.130 "keyring_get_keys", 00:05:09.130 "vfu_tgt_set_base_path", 00:05:09.130 "framework_get_pci_devices", 00:05:09.130 "framework_get_config", 00:05:09.130 "framework_get_subsystems", 00:05:09.130 "fsdev_set_opts", 00:05:09.130 "fsdev_get_opts", 00:05:09.130 "trace_get_info", 
00:05:09.130 "trace_get_tpoint_group_mask", 00:05:09.130 "trace_disable_tpoint_group", 00:05:09.130 "trace_enable_tpoint_group", 00:05:09.130 "trace_clear_tpoint_mask", 00:05:09.130 "trace_set_tpoint_mask", 00:05:09.130 "notify_get_notifications", 00:05:09.130 "notify_get_types", 00:05:09.130 "spdk_get_version", 00:05:09.130 "rpc_get_methods" 00:05:09.130 ] 00:05:09.130 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:09.130 22:36:16 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:09.130 22:36:16 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.390 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:09.390 22:36:16 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2184697 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2184697 ']' 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2184697 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184697 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184697' 00:05:09.390 killing process with pid 2184697 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2184697 00:05:09.390 22:36:16 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2184697 00:05:09.649 00:05:09.649 real 0m1.143s 00:05:09.649 user 0m1.884s 00:05:09.649 sys 0m0.462s 00:05:09.649 22:36:17 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.649 22:36:17 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:05:09.649 ************************************ 00:05:09.649 END TEST spdkcli_tcp 00:05:09.649 ************************************ 00:05:09.649 22:36:17 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.649 22:36:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.649 22:36:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.649 22:36:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.649 ************************************ 00:05:09.649 START TEST dpdk_mem_utility 00:05:09.649 ************************************ 00:05:09.649 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:09.649 * Looking for test storage... 00:05:09.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/dpdk_memory_utility 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.909 22:36:17 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 
00:05:09.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.909 --rc genhtml_branch_coverage=1 00:05:09.909 --rc genhtml_function_coverage=1 00:05:09.909 --rc genhtml_legend=1 00:05:09.909 --rc geninfo_all_blocks=1 00:05:09.909 --rc geninfo_unexecuted_blocks=1 00:05:09.909 00:05:09.909 ' 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:09.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.909 --rc genhtml_branch_coverage=1 00:05:09.909 --rc genhtml_function_coverage=1 00:05:09.909 --rc genhtml_legend=1 00:05:09.909 --rc geninfo_all_blocks=1 00:05:09.909 --rc geninfo_unexecuted_blocks=1 00:05:09.909 00:05:09.909 ' 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:09.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.909 --rc genhtml_branch_coverage=1 00:05:09.909 --rc genhtml_function_coverage=1 00:05:09.909 --rc genhtml_legend=1 00:05:09.909 --rc geninfo_all_blocks=1 00:05:09.909 --rc geninfo_unexecuted_blocks=1 00:05:09.909 00:05:09.909 ' 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:09.909 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.909 --rc genhtml_branch_coverage=1 00:05:09.909 --rc genhtml_function_coverage=1 00:05:09.909 --rc genhtml_legend=1 00:05:09.909 --rc geninfo_all_blocks=1 00:05:09.909 --rc geninfo_unexecuted_blocks=1 00:05:09.909 00:05:09.909 ' 00:05:09.909 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:05:09.909 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2184927 00:05:09.909 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2184927 00:05:09.909 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2184927 ']' 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.909 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:09.909 [2024-12-10 22:36:17.515367] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:09.909 [2024-12-10 22:36:17.515417] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2184927 ] 00:05:09.909 [2024-12-10 22:36:17.594263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.909 [2024-12-10 22:36:17.635906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.169 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:10.169 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:10.169 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:10.169 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:10.169 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.169 
22:36:17 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.169 { 00:05:10.169 "filename": "/tmp/spdk_mem_dump.txt" 00:05:10.169 } 00:05:10.169 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.169 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py 00:05:10.428 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:10.428 1 heaps totaling size 818.000000 MiB 00:05:10.428 size: 818.000000 MiB heap id: 0 00:05:10.428 end heaps---------- 00:05:10.428 9 mempools totaling size 603.782043 MiB 00:05:10.428 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:10.428 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:10.428 size: 100.555481 MiB name: bdev_io_2184927 00:05:10.428 size: 50.003479 MiB name: msgpool_2184927 00:05:10.428 size: 36.509338 MiB name: fsdev_io_2184927 00:05:10.428 size: 21.763794 MiB name: PDU_Pool 00:05:10.428 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:10.428 size: 4.133484 MiB name: evtpool_2184927 00:05:10.428 size: 0.026123 MiB name: Session_Pool 00:05:10.428 end mempools------- 00:05:10.428 6 memzones totaling size 4.142822 MiB 00:05:10.428 size: 1.000366 MiB name: RG_ring_0_2184927 00:05:10.428 size: 1.000366 MiB name: RG_ring_1_2184927 00:05:10.428 size: 1.000366 MiB name: RG_ring_4_2184927 00:05:10.428 size: 1.000366 MiB name: RG_ring_5_2184927 00:05:10.428 size: 0.125366 MiB name: RG_ring_2_2184927 00:05:10.428 size: 0.015991 MiB name: RG_ring_3_2184927 00:05:10.428 end memzones------- 00:05:10.428 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/dpdk_mem_info.py -m 0 00:05:10.428 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:10.428 list of free elements. 
size: 10.852478 MiB 00:05:10.428 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:10.428 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:10.428 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:10.428 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:10.429 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:10.429 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:10.429 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:10.429 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:10.429 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:10.429 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:10.429 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:10.429 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:10.429 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:10.429 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:10.429 element at address: 0x200000800000 with size: 0.355042 MiB 00:05:10.429 list of standard malloc elements. 
size: 199.218628 MiB 00:05:10.429 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:10.429 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:10.429 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:10.429 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:10.429 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:10.429 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:10.429 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:10.429 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:10.429 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:10.429 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:10.429 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:10.429 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:10.429 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:10.429 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:10.429 list of memzone associated elements. 
size: 607.928894 MiB 00:05:10.429 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:10.429 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:10.429 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:10.429 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:10.429 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:10.429 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_2184927_0 00:05:10.429 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:10.429 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2184927_0 00:05:10.429 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:10.429 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2184927_0 00:05:10.429 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:10.429 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:10.429 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:10.429 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:10.429 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:10.429 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2184927_0 00:05:10.429 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:10.429 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2184927 00:05:10.429 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:10.429 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2184927 00:05:10.429 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:10.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:10.429 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:10.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:10.429 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:10.429 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:10.429 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:10.429 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:10.429 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:10.429 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2184927 00:05:10.429 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:10.429 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2184927 00:05:10.429 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:10.429 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2184927 00:05:10.429 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:10.429 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2184927 00:05:10.429 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:10.429 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2184927 00:05:10.429 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:10.429 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2184927 00:05:10.429 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:10.429 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:10.429 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:10.429 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:10.429 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:10.429 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:10.429 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:10.429 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2184927 00:05:10.429 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:10.429 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2184927 00:05:10.429 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:05:10.429 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:10.429 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:10.429 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:10.429 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:10.429 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2184927 00:05:10.429 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:10.429 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:10.429 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:10.429 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2184927 00:05:10.429 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:10.429 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2184927 00:05:10.429 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:10.429 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2184927 00:05:10.429 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:10.429 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:10.429 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:10.429 22:36:17 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2184927 00:05:10.429 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2184927 ']' 00:05:10.429 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2184927 00:05:10.429 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:10.429 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.429 22:36:17 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2184927 00:05:10.429 22:36:18 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.429 22:36:18 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.429 22:36:18 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2184927' 00:05:10.429 killing process with pid 2184927 00:05:10.429 22:36:18 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2184927 00:05:10.429 22:36:18 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2184927 00:05:10.689 00:05:10.689 real 0m1.030s 00:05:10.689 user 0m0.992s 00:05:10.689 sys 0m0.398s 00:05:10.689 22:36:18 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.689 22:36:18 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:10.689 ************************************ 00:05:10.689 END TEST dpdk_mem_utility 00:05:10.689 ************************************ 00:05:10.689 22:36:18 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh 00:05:10.689 22:36:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.689 22:36:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.689 22:36:18 -- common/autotest_common.sh@10 -- # set +x 00:05:10.689 ************************************ 00:05:10.689 START TEST event 00:05:10.689 ************************************ 00:05:10.689 22:36:18 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event.sh 00:05:10.948 * Looking for test storage... 
00:05:10.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.949 22:36:18 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.949 22:36:18 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.949 22:36:18 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.949 22:36:18 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.949 22:36:18 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.949 22:36:18 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.949 22:36:18 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.949 22:36:18 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.949 22:36:18 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.949 22:36:18 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.949 22:36:18 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.949 22:36:18 event -- scripts/common.sh@344 -- # case "$op" in 00:05:10.949 22:36:18 event -- scripts/common.sh@345 -- # : 1 00:05:10.949 22:36:18 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.949 22:36:18 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.949 22:36:18 event -- scripts/common.sh@365 -- # decimal 1 00:05:10.949 22:36:18 event -- scripts/common.sh@353 -- # local d=1 00:05:10.949 22:36:18 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.949 22:36:18 event -- scripts/common.sh@355 -- # echo 1 00:05:10.949 22:36:18 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.949 22:36:18 event -- scripts/common.sh@366 -- # decimal 2 00:05:10.949 22:36:18 event -- scripts/common.sh@353 -- # local d=2 00:05:10.949 22:36:18 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.949 22:36:18 event -- scripts/common.sh@355 -- # echo 2 00:05:10.949 22:36:18 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.949 22:36:18 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.949 22:36:18 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.949 22:36:18 event -- scripts/common.sh@368 -- # return 0 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.949 --rc genhtml_branch_coverage=1 00:05:10.949 --rc genhtml_function_coverage=1 00:05:10.949 --rc genhtml_legend=1 00:05:10.949 --rc geninfo_all_blocks=1 00:05:10.949 --rc geninfo_unexecuted_blocks=1 00:05:10.949 00:05:10.949 ' 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.949 --rc genhtml_branch_coverage=1 00:05:10.949 --rc genhtml_function_coverage=1 00:05:10.949 --rc genhtml_legend=1 00:05:10.949 --rc geninfo_all_blocks=1 00:05:10.949 --rc geninfo_unexecuted_blocks=1 00:05:10.949 00:05:10.949 ' 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.949 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:10.949 --rc genhtml_branch_coverage=1 00:05:10.949 --rc genhtml_function_coverage=1 00:05:10.949 --rc genhtml_legend=1 00:05:10.949 --rc geninfo_all_blocks=1 00:05:10.949 --rc geninfo_unexecuted_blocks=1 00:05:10.949 00:05:10.949 ' 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.949 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.949 --rc genhtml_branch_coverage=1 00:05:10.949 --rc genhtml_function_coverage=1 00:05:10.949 --rc genhtml_legend=1 00:05:10.949 --rc geninfo_all_blocks=1 00:05:10.949 --rc geninfo_unexecuted_blocks=1 00:05:10.949 00:05:10.949 ' 00:05:10.949 22:36:18 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/nbd_common.sh 00:05:10.949 22:36:18 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:10.949 22:36:18 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:10.949 22:36:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.949 22:36:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:10.949 ************************************ 00:05:10.949 START TEST event_perf 00:05:10.949 ************************************ 00:05:10.949 22:36:18 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:10.949 Running I/O for 1 seconds...[2024-12-10 22:36:18.619767] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:10.949 [2024-12-10 22:36:18.619843] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185222 ] 00:05:11.208 [2024-12-10 22:36:18.696423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:11.208 [2024-12-10 22:36:18.740069] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:11.208 [2024-12-10 22:36:18.740192] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:11.208 [2024-12-10 22:36:18.740257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.208 [2024-12-10 22:36:18.740258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:12.145 Running I/O for 1 seconds... 00:05:12.145 lcore 0: 203048 00:05:12.145 lcore 1: 203047 00:05:12.145 lcore 2: 203048 00:05:12.145 lcore 3: 203046 00:05:12.145 done. 
00:05:12.145 00:05:12.145 real 0m1.180s 00:05:12.145 user 0m4.094s 00:05:12.145 sys 0m0.082s 00:05:12.145 22:36:19 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.145 22:36:19 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:12.145 ************************************ 00:05:12.145 END TEST event_perf 00:05:12.145 ************************************ 00:05:12.145 22:36:19 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:05:12.145 22:36:19 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:12.145 22:36:19 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.145 22:36:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:12.145 ************************************ 00:05:12.145 START TEST event_reactor 00:05:12.145 ************************************ 00:05:12.145 22:36:19 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor/reactor -t 1 00:05:12.145 [2024-12-10 22:36:19.869491] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:12.145 [2024-12-10 22:36:19.869559] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185473 ] 00:05:12.404 [2024-12-10 22:36:19.947772] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.404 [2024-12-10 22:36:19.985360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.341 test_start 00:05:13.341 oneshot 00:05:13.341 tick 100 00:05:13.341 tick 100 00:05:13.341 tick 250 00:05:13.341 tick 100 00:05:13.341 tick 100 00:05:13.341 tick 250 00:05:13.341 tick 100 00:05:13.341 tick 500 00:05:13.341 tick 100 00:05:13.341 tick 100 00:05:13.341 tick 250 00:05:13.341 tick 100 00:05:13.341 tick 100 00:05:13.341 test_end 00:05:13.341 00:05:13.341 real 0m1.179s 00:05:13.341 user 0m1.091s 00:05:13.341 sys 0m0.082s 00:05:13.341 22:36:21 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:13.341 22:36:21 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:13.341 ************************************ 00:05:13.341 END TEST event_reactor 00:05:13.341 ************************************ 00:05:13.341 22:36:21 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:13.341 22:36:21 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:13.341 22:36:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:13.341 22:36:21 event -- common/autotest_common.sh@10 -- # set +x 00:05:13.600 ************************************ 00:05:13.600 START TEST event_reactor_perf 00:05:13.600 ************************************ 00:05:13.600 22:36:21 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:13.600 [2024-12-10 22:36:21.116582] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:13.600 [2024-12-10 22:36:21.116650] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185723 ] 00:05:13.600 [2024-12-10 22:36:21.193778] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.600 [2024-12-10 22:36:21.232369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.978 test_start 00:05:14.978 test_end 00:05:14.978 Performance: 505083 events per second 00:05:14.978 00:05:14.978 real 0m1.178s 00:05:14.978 user 0m1.095s 00:05:14.978 sys 0m0.079s 00:05:14.978 22:36:22 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.978 22:36:22 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:14.978 ************************************ 00:05:14.978 END TEST event_reactor_perf 00:05:14.978 ************************************ 00:05:14.978 22:36:22 event -- event/event.sh@49 -- # uname -s 00:05:14.978 22:36:22 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:14.978 22:36:22 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:05:14.978 22:36:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.978 22:36:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.978 22:36:22 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.978 ************************************ 00:05:14.978 START TEST event_scheduler 00:05:14.978 ************************************ 00:05:14.978 22:36:22 event.event_scheduler -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler.sh 00:05:14.978 * Looking for test storage... 00:05:14.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler 00:05:14.978 22:36:22 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.978 22:36:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.978 22:36:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.978 22:36:22 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.978 22:36:22 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.979 --rc genhtml_branch_coverage=1 00:05:14.979 --rc genhtml_function_coverage=1 00:05:14.979 --rc genhtml_legend=1 00:05:14.979 --rc geninfo_all_blocks=1 00:05:14.979 --rc geninfo_unexecuted_blocks=1 00:05:14.979 00:05:14.979 ' 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.979 --rc genhtml_branch_coverage=1 00:05:14.979 --rc genhtml_function_coverage=1 00:05:14.979 --rc 
genhtml_legend=1 00:05:14.979 --rc geninfo_all_blocks=1 00:05:14.979 --rc geninfo_unexecuted_blocks=1 00:05:14.979 00:05:14.979 ' 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.979 --rc genhtml_branch_coverage=1 00:05:14.979 --rc genhtml_function_coverage=1 00:05:14.979 --rc genhtml_legend=1 00:05:14.979 --rc geninfo_all_blocks=1 00:05:14.979 --rc geninfo_unexecuted_blocks=1 00:05:14.979 00:05:14.979 ' 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.979 --rc genhtml_branch_coverage=1 00:05:14.979 --rc genhtml_function_coverage=1 00:05:14.979 --rc genhtml_legend=1 00:05:14.979 --rc geninfo_all_blocks=1 00:05:14.979 --rc geninfo_unexecuted_blocks=1 00:05:14.979 00:05:14.979 ' 00:05:14.979 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:14.979 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2186006 00:05:14.979 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:14.979 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:14.979 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2186006 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2186006 ']' 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.979 22:36:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:14.979 [2024-12-10 22:36:22.562549] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:14.979 [2024-12-10 22:36:22.562597] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186006 ] 00:05:14.979 [2024-12-10 22:36:22.637971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:14.979 [2024-12-10 22:36:22.682495] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.979 [2024-12-10 22:36:22.682603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.979 [2024-12-10 22:36:22.682625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:14.979 [2024-12-10 22:36:22.682627] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:15.238 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 [2024-12-10 22:36:22.723240] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:15.238 [2024-12-10 22:36:22.723257] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:15.238 [2024-12-10 22:36:22.723267] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:15.238 [2024-12-10 22:36:22.723272] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:15.238 [2024-12-10 22:36:22.723277] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.238 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.238 [2024-12-10 22:36:22.798130] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:15.238 22:36:22 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:15.239 22:36:22 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.239 22:36:22 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 ************************************ 00:05:15.239 START TEST scheduler_create_thread 00:05:15.239 ************************************ 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 2 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 3 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 4 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 5 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 6 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 7 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 8 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 9 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 10 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.239 22:36:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:15.806 22:36:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.806 22:36:23 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:15.806 22:36:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.806 22:36:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:17.184 22:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.184 22:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:17.184 22:36:24 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:17.184 22:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.184 22:36:24 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.561 22:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:18.561 00:05:18.561 real 0m3.102s 00:05:18.561 user 0m0.021s 00:05:18.561 sys 0m0.008s 00:05:18.561 22:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.561 22:36:25 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:18.561 ************************************ 00:05:18.561 END TEST scheduler_create_thread 00:05:18.561 ************************************ 00:05:18.561 22:36:25 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:18.561 22:36:25 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2186006 00:05:18.561 22:36:25 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2186006 ']' 00:05:18.561 22:36:25 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2186006 00:05:18.561 22:36:25 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:18.561 22:36:25 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.561 22:36:25 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2186006 00:05:18.561 22:36:26 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:18.561 22:36:26 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:18.561 22:36:26 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2186006' 00:05:18.561 killing process with pid 2186006 00:05:18.561 22:36:26 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2186006 00:05:18.561 22:36:26 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2186006 00:05:18.820 [2024-12-10 22:36:26.313386] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:18.820 00:05:18.820 real 0m4.155s 00:05:18.820 user 0m6.632s 00:05:18.820 sys 0m0.383s 00:05:18.820 22:36:26 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.820 22:36:26 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:18.820 ************************************ 00:05:18.820 END TEST event_scheduler 00:05:18.820 ************************************ 00:05:18.820 22:36:26 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:18.820 22:36:26 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:18.820 22:36:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.820 22:36:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.820 22:36:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.079 ************************************ 00:05:19.079 START TEST app_repeat 00:05:19.079 ************************************ 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2186744 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2186744' 00:05:19.079 Process app_repeat pid: 2186744 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:19.079 spdk_app_start Round 0 00:05:19.079 22:36:26 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2186744 /var/tmp/spdk-nbd.sock 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2186744 ']' 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:19.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.079 22:36:26 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:19.079 [2024-12-10 22:36:26.614084] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:19.079 [2024-12-10 22:36:26.614136] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186744 ] 00:05:19.079 [2024-12-10 22:36:26.691210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:19.080 [2024-12-10 22:36:26.730561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.080 [2024-12-10 22:36:26.730562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.338 22:36:26 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:19.339 22:36:26 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:19.339 22:36:26 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.339 Malloc0 00:05:19.339 22:36:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:19.597 Malloc1 00:05:19.597 22:36:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.597 22:36:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.597 22:36:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.597 22:36:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:19.597 22:36:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.597 22:36:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:19.597 22:36:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:19.598 
22:36:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.598 22:36:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:19.856 /dev/nbd0 00:05:19.856 22:36:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:19.856 22:36:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:19.856 22:36:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:19.856 22:36:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:19.856 22:36:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:19.856 22:36:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:19.856 22:36:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:19.856 22:36:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:19.857 1+0 records in 00:05:19.857 1+0 records out 00:05:19.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248355 s, 16.5 MB/s 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:19.857 22:36:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:19.857 22:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:19.857 22:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:19.857 22:36:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:20.116 /dev/nbd1 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@888 -- 
# (( i = 1 )) 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:20.116 1+0 records in 00:05:20.116 1+0 records out 00:05:20.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000206275 s, 19.9 MB/s 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:20.116 22:36:27 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.116 22:36:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:20.375 22:36:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:20.375 { 00:05:20.375 "nbd_device": "/dev/nbd0", 00:05:20.375 "bdev_name": "Malloc0" 00:05:20.375 }, 00:05:20.375 { 00:05:20.375 "nbd_device": "/dev/nbd1", 00:05:20.375 "bdev_name": "Malloc1" 00:05:20.375 } 00:05:20.375 ]' 00:05:20.375 22:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:20.375 { 00:05:20.375 
"nbd_device": "/dev/nbd0", 00:05:20.375 "bdev_name": "Malloc0" 00:05:20.375 }, 00:05:20.375 { 00:05:20.375 "nbd_device": "/dev/nbd1", 00:05:20.375 "bdev_name": "Malloc1" 00:05:20.375 } 00:05:20.375 ]' 00:05:20.375 22:36:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:20.375 /dev/nbd1' 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:20.375 /dev/nbd1' 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:20.375 256+0 records in 00:05:20.375 256+0 records out 00:05:20.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106207 s, 98.7 MB/s 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.375 
22:36:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:20.375 256+0 records in 00:05:20.375 256+0 records out 00:05:20.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013856 s, 75.7 MB/s 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:20.375 22:36:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:20.375 256+0 records in 00:05:20.375 256+0 records out 00:05:20.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148976 s, 70.4 MB/s 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:20.376 22:36:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 
/dev/nbd1 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:20.635 22:36:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:20.894 22:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 
00:05:20.894 22:36:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:20.895 22:36:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:21.154 22:36:28 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:21.154 22:36:28 
event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:21.414 22:36:28 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:21.414 [2024-12-10 22:36:29.140922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:21.673 [2024-12-10 22:36:29.177756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:21.673 [2024-12-10 22:36:29.177757] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.673 [2024-12-10 22:36:29.218008] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:21.673 [2024-12-10 22:36:29.218045] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:24.384 22:36:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:24.384 22:36:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:24.384 spdk_app_start Round 1 00:05:24.384 22:36:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2186744 /var/tmp/spdk-nbd.sock 00:05:24.384 22:36:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2186744 ']' 00:05:24.384 22:36:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:24.384 22:36:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.384 22:36:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:24.384 22:36:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.384 22:36:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:24.643 22:36:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.643 22:36:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:24.643 22:36:32 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.902 Malloc0 00:05:24.902 22:36:32 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:24.902 Malloc1 00:05:24.902 22:36:32 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:24.902 22:36:32 event.app_repeat -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:05:24.902 22:36:32 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:24.903 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:24.903 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:24.903 22:36:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:25.162 /dev/nbd0 00:05:25.162 22:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:25.162 22:36:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.162 1+0 records in 00:05:25.162 1+0 records out 00:05:25.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199911 s, 20.5 MB/s 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.162 
22:36:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.162 22:36:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.162 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.162 22:36:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.162 22:36:32 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:25.421 /dev/nbd1 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:25.421 1+0 records in 00:05:25.421 1+0 records out 00:05:25.421 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000113722 s, 36.0 MB/s 00:05:25.421 22:36:33 event.app_repeat -- 
common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:25.421 22:36:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.421 22:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:25.680 22:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:25.680 { 00:05:25.680 "nbd_device": "/dev/nbd0", 00:05:25.680 "bdev_name": "Malloc0" 00:05:25.680 }, 00:05:25.680 { 00:05:25.680 "nbd_device": "/dev/nbd1", 00:05:25.680 "bdev_name": "Malloc1" 00:05:25.680 } 00:05:25.680 ]' 00:05:25.680 22:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:25.680 { 00:05:25.681 "nbd_device": "/dev/nbd0", 00:05:25.681 "bdev_name": "Malloc0" 00:05:25.681 }, 00:05:25.681 { 00:05:25.681 "nbd_device": "/dev/nbd1", 00:05:25.681 "bdev_name": "Malloc1" 00:05:25.681 } 00:05:25.681 ]' 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:25.681 /dev/nbd1' 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # 
echo '/dev/nbd0 00:05:25.681 /dev/nbd1' 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:25.681 256+0 records in 00:05:25.681 256+0 records out 00:05:25.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00332973 s, 315 MB/s 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:25.681 256+0 records in 00:05:25.681 256+0 records out 00:05:25.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147412 s, 71.1 MB/s 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # 
dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:25.681 256+0 records in 00:05:25.681 256+0 records out 00:05:25.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0154575 s, 67.8 MB/s 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:25.681 22:36:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:25.940 22:36:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:26.199 22:36:33 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.199 22:36:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:26.456 22:36:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:26.456 22:36:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:26.715 22:36:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:26.974 [2024-12-10 22:36:34.462491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.974 [2024-12-10 22:36:34.498859] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.974 [2024-12-10 
22:36:34.498860] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.974 [2024-12-10 22:36:34.540691] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:26.974 [2024-12-10 22:36:34.540729] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:30.266 22:36:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:30.266 22:36:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:30.266 spdk_app_start Round 2 00:05:30.266 22:36:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2186744 /var/tmp/spdk-nbd.sock 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2186744 ']' 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:30.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.266 22:36:37 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:30.266 22:36:37 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.266 Malloc0 00:05:30.266 22:36:37 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:30.266 Malloc1 00:05:30.266 22:36:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:30.266 22:36:37 event.app_repeat -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.266 22:36:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:30.525 /dev/nbd0 00:05:30.525 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:30.525 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:30.525 22:36:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:30.525 22:36:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.525 22:36:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.525 22:36:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.526 1+0 records in 00:05:30.526 1+0 records out 00:05:30.526 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230912 s, 17.7 MB/s 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.526 
22:36:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.526 22:36:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.526 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.526 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.526 22:36:38 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:30.785 /dev/nbd1 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:30.785 1+0 records in 00:05:30.785 1+0 records out 00:05:30.785 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000296191 s, 13.8 MB/s 00:05:30.785 22:36:38 event.app_repeat -- 
common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdtest 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:30.785 22:36:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:30.785 22:36:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.044 { 00:05:31.044 "nbd_device": "/dev/nbd0", 00:05:31.044 "bdev_name": "Malloc0" 00:05:31.044 }, 00:05:31.044 { 00:05:31.044 "nbd_device": "/dev/nbd1", 00:05:31.044 "bdev_name": "Malloc1" 00:05:31.044 } 00:05:31.044 ]' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.044 { 00:05:31.044 "nbd_device": "/dev/nbd0", 00:05:31.044 "bdev_name": "Malloc0" 00:05:31.044 }, 00:05:31.044 { 00:05:31.044 "nbd_device": "/dev/nbd1", 00:05:31.044 "bdev_name": "Malloc1" 00:05:31.044 } 00:05:31.044 ]' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.044 /dev/nbd1' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # 
echo '/dev/nbd0 00:05:31.044 /dev/nbd1' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.044 256+0 records in 00:05:31.044 256+0 records out 00:05:31.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104027 s, 101 MB/s 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:31.044 256+0 records in 00:05:31.044 256+0 records out 00:05:31.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014491 s, 72.4 MB/s 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:31.044 256+0 records in 00:05:31.044 256+0 records out 00:05:31.044 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147908 s, 70.9 MB/s 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd0 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest /dev/nbd1 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/nbdrandtest 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.044 22:36:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.045 22:36:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:31.045 22:36:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:31.045 22:36:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.045 22:36:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:31.304 22:36:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:31.563 22:36:39 
event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.563 22:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:31.822 22:36:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:31.822 22:36:39 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.081 22:36:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:32.081 [2024-12-10 22:36:39.805517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:32.341 [2024-12-10 22:36:39.842592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.341 [2024-12-10 
22:36:39.842592] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.341 [2024-12-10 22:36:39.883766] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:32.341 [2024-12-10 22:36:39.883808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.262 22:36:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2186744 /var/tmp/spdk-nbd.sock 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2186744 ']' 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:35.262 22:36:42 event.app_repeat -- event/event.sh@39 -- # killprocess 2186744 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2186744 ']' 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2186744 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2186744 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2186744' 00:05:35.262 killing process with pid 2186744 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2186744 00:05:35.262 22:36:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2186744 00:05:35.522 spdk_app_start is called in Round 0. 00:05:35.522 Shutdown signal received, stop current app iteration 00:05:35.522 Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 reinitialization... 00:05:35.522 spdk_app_start is called in Round 1. 00:05:35.522 Shutdown signal received, stop current app iteration 00:05:35.522 Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 reinitialization... 00:05:35.522 spdk_app_start is called in Round 2. 
00:05:35.522 Shutdown signal received, stop current app iteration 00:05:35.522 Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 reinitialization... 00:05:35.522 spdk_app_start is called in Round 3. 00:05:35.522 Shutdown signal received, stop current app iteration 00:05:35.522 22:36:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:35.522 22:36:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:35.522 00:05:35.522 real 0m16.472s 00:05:35.522 user 0m36.230s 00:05:35.522 sys 0m2.573s 00:05:35.522 22:36:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.522 22:36:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:35.522 ************************************ 00:05:35.522 END TEST app_repeat 00:05:35.522 ************************************ 00:05:35.522 22:36:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:35.522 22:36:43 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:05:35.522 22:36:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.522 22:36:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.522 22:36:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:35.522 ************************************ 00:05:35.522 START TEST cpu_locks 00:05:35.522 ************************************ 00:05:35.522 22:36:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event/cpu_locks.sh 00:05:35.522 * Looking for test storage... 
00:05:35.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/event 00:05:35.522 22:36:43 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:35.522 22:36:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:35.522 22:36:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:35.781 22:36:43 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.781 22:36:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:35.782 22:36:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.782 22:36:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.782 22:36:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.782 22:36:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.782 --rc genhtml_branch_coverage=1 00:05:35.782 --rc genhtml_function_coverage=1 00:05:35.782 --rc genhtml_legend=1 00:05:35.782 --rc geninfo_all_blocks=1 00:05:35.782 --rc geninfo_unexecuted_blocks=1 00:05:35.782 00:05:35.782 ' 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.782 --rc genhtml_branch_coverage=1 00:05:35.782 --rc genhtml_function_coverage=1 00:05:35.782 --rc genhtml_legend=1 00:05:35.782 --rc geninfo_all_blocks=1 00:05:35.782 --rc geninfo_unexecuted_blocks=1 
00:05:35.782 00:05:35.782 ' 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.782 --rc genhtml_branch_coverage=1 00:05:35.782 --rc genhtml_function_coverage=1 00:05:35.782 --rc genhtml_legend=1 00:05:35.782 --rc geninfo_all_blocks=1 00:05:35.782 --rc geninfo_unexecuted_blocks=1 00:05:35.782 00:05:35.782 ' 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:35.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.782 --rc genhtml_branch_coverage=1 00:05:35.782 --rc genhtml_function_coverage=1 00:05:35.782 --rc genhtml_legend=1 00:05:35.782 --rc geninfo_all_blocks=1 00:05:35.782 --rc geninfo_unexecuted_blocks=1 00:05:35.782 00:05:35.782 ' 00:05:35.782 22:36:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:35.782 22:36:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:35.782 22:36:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:35.782 22:36:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.782 22:36:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.782 ************************************ 00:05:35.782 START TEST default_locks 00:05:35.782 ************************************ 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2189751 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2189751 00:05:35.782 22:36:43 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2189751 ']' 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.782 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:35.782 [2024-12-10 22:36:43.384635] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:35.782 [2024-12-10 22:36:43.384677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189751 ] 00:05:35.782 [2024-12-10 22:36:43.456831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.782 [2024-12-10 22:36:43.496397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.042 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.042 22:36:43 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:36.042 22:36:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2189751 00:05:36.042 22:36:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2189751 00:05:36.042 22:36:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:36.610 lslocks: write error 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2189751 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2189751 ']' 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2189751 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2189751 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2189751' 00:05:36.611 killing process with pid 2189751 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2189751 00:05:36.611 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2189751 00:05:36.870 22:36:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2189751 00:05:36.870 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2189751 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2189751 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2189751 ']' 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (2189751) - No such process 00:05:36.871 ERROR: process (pid: 2189751) is no longer running 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:36.871 00:05:36.871 real 0m1.215s 00:05:36.871 user 0m1.172s 00:05:36.871 sys 0m0.557s 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:36.871 22:36:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:36.871 ************************************ 00:05:36.871 END TEST default_locks 00:05:36.871 ************************************ 00:05:36.871 22:36:44 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:36.871 22:36:44 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:36.871 22:36:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:36.871 22:36:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.130 ************************************ 00:05:37.130 START TEST default_locks_via_rpc 00:05:37.130 ************************************ 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2190008 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2190008 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2190008 ']' 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.130 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.130 [2024-12-10 22:36:44.668011] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:37.130 [2024-12-10 22:36:44.668053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190008 ]
00:05:37.130 [2024-12-10 22:36:44.741460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.130 [2024-12-10 22:36:44.781293] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.390 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:37.390 22:36:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2190008
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2190008
00:05:37.390 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2190008
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2190008 ']'
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2190008
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190008
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:37.966 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190008'
killing process with pid 2190008
22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2190008
22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2190008
00:05:38.226
00:05:38.226 real 0m1.236s
00:05:38.226 user 0m1.207s
00:05:38.226 sys 0m0.545s
00:05:38.226 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:38.226 22:36:45 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:38.226 ************************************
00:05:38.226 END TEST default_locks_via_rpc
00:05:38.226 ************************************
00:05:38.226 22:36:45 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:38.226 22:36:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:38.226 22:36:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:38.226 22:36:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:38.226 ************************************
00:05:38.226 START TEST non_locking_app_on_locked_coremask
00:05:38.226 ************************************
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2190265
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2190265 /var/tmp/spdk.sock
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2190265 ']'
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:38.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.226 22:36:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.486 [2024-12-10 22:36:45.973308] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:38.486 [2024-12-10 22:36:45.973354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190265 ]
00:05:38.486 [2024-12-10 22:36:46.049735] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.486 [2024-12-10 22:36:46.090945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2190310
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2190310 /var/tmp/spdk2.sock
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2190310 ']'
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:38.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:38.745 22:36:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:38.745 [2024-12-10 22:36:46.357947] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:38.745 [2024-12-10 22:36:46.357995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190310 ]
00:05:38.745 [2024-12-10 22:36:46.449765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:38.745 [2024-12-10 22:36:46.449793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:39.004 [2024-12-10 22:36:46.538042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:39.572 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:39.572 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:39.572 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2190265
00:05:39.572 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2190265
00:05:39.572 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:40.140 lslocks: write error
00:05:40.140 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2190265
00:05:40.140 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2190265 ']'
00:05:40.140 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2190265
00:05:40.140 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:40.140 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.140 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190265
00:05:40.399 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:40.399 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:40.399 22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190265'
killing process with pid 2190265
22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2190265
22:36:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2190265
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2190310
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2190310 ']'
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2190310
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190310
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:40.968 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190310'
killing process with pid 2190310
22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2190310
22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2190310
00:05:41.227
00:05:41.227 real 0m2.927s
00:05:41.227 user 0m3.098s
00:05:41.227 sys 0m0.971s
00:05:41.227 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:41.227 22:36:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.227 ************************************
00:05:41.227 END TEST non_locking_app_on_locked_coremask
00:05:41.227 ************************************
00:05:41.227 22:36:48 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:41.227 22:36:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:41.227 22:36:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:41.227 22:36:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:41.227 ************************************
00:05:41.227 START TEST locking_app_on_unlocked_coremask
00:05:41.227 ************************************
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2190768
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2190768 /var/tmp/spdk.sock
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2190768 ']'
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:41.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.228 22:36:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.487 [2024-12-10 22:36:48.966959] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:41.487 [2024-12-10 22:36:48.967001] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190768 ]
00:05:41.487 [2024-12-10 22:36:49.044206] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:41.487 [2024-12-10 22:36:49.044237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.487 [2024-12-10 22:36:49.085491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2190977
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2190977 /var/tmp/spdk2.sock
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2190977 ']'
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:41.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:41.780 22:36:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:41.780 [2024-12-10 22:36:49.351285] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:41.780 [2024-12-10 22:36:49.351338] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190977 ]
00:05:41.780 [2024-12-10 22:36:49.445801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.040 [2024-12-10 22:36:49.533814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:42.606 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:42.607 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:42.607 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2190977
00:05:42.607 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2190977
00:05:42.607 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:43.174 lslocks: write error
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2190768
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2190768 ']'
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2190768
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190768
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:43.174 22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190768'
killing process with pid 2190768
22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2190768
22:36:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2190768
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2190977
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2190977 ']'
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2190977
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190977
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:43.743 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190977'
killing process with pid 2190977
22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2190977
22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2190977
00:05:44.312
00:05:44.312 real 0m2.836s
00:05:44.312 user 0m3.008s
00:05:44.312 sys 0m0.935s
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:44.312 ************************************
00:05:44.312 END TEST locking_app_on_unlocked_coremask
00:05:44.312 ************************************
00:05:44.312 22:36:51 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:44.312 22:36:51 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:44.312 22:36:51 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:44.312 22:36:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:44.312 ************************************
00:05:44.312 START TEST locking_app_on_locked_coremask
00:05:44.312 ************************************
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2191323
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2191323 /var/tmp/spdk.sock
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2191323 ']'
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:44.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:44.312 22:36:51 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:44.572 [2024-12-10 22:36:51.865343] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:44.572 [2024-12-10 22:36:51.865385] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191323 ]
00:05:44.572 [2024-12-10 22:36:51.939675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:44.572 [2024-12-10 22:36:51.979021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2191483
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2191483 /var/tmp/spdk2.sock
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2191483 /var/tmp/spdk2.sock
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2191483 /var/tmp/spdk2.sock
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2191483 ']'
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:44.572 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:44.831 [2024-12-10 22:36:52.260909] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:44.831 [2024-12-10 22:36:52.260956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191483 ]
00:05:44.831 [2024-12-10 22:36:52.351954] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2191323 has claimed it.
[2024-12-10 22:36:52.351997] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:45.400 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (2191483) - No such process
00:05:45.400 ERROR: process (pid: 2191483) is no longer running
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2191323
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2191323
00:05:45.400 22:36:52 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:45.660 lslocks: write error
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2191323
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2191323 ']'
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2191323
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191323
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:45.660 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191323'
killing process with pid 2191323
22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2191323
22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2191323
00:05:46.229
00:05:46.229 real 0m1.835s
00:05:46.229 user 0m1.984s
00:05:46.229 sys 0m0.591s
00:05:46.229 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:46.229 22:36:53 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.229 ************************************
00:05:46.229 END TEST locking_app_on_locked_coremask
00:05:46.229 ************************************
00:05:46.229 22:36:53 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:46.229 22:36:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:46.229 22:36:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:46.229 22:36:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:46.229 ************************************
00:05:46.229 START TEST locking_overlapped_coremask
00:05:46.229 ************************************
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2191751
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2191751 /var/tmp/spdk.sock
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2191751 ']'
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:46.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.229 22:36:53 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.229 [2024-12-10 22:36:53.774692] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:46.229 [2024-12-10 22:36:53.774738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191751 ]
00:05:46.229 [2024-12-10 22:36:53.849334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:46.229 [2024-12-10 22:36:53.888482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:46.229 [2024-12-10 22:36:53.888588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:46.229 [2024-12-10 22:36:53.888589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2191756
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2191756 /var/tmp/spdk2.sock
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2191756 /var/tmp/spdk2.sock
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2191756 /var/tmp/spdk2.sock
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2191756 ']'
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:46.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:46.488 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:46.488 [2024-12-10 22:36:54.158782] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:05:46.488 [2024-12-10 22:36:54.158822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2191756 ] 00:05:46.747 [2024-12-10 22:36:54.250187] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2191751 has claimed it. 00:05:46.747 [2024-12-10 22:36:54.250245] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:47.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 850: kill: (2191756) - No such process 00:05:47.315 ERROR: process (pid: 2191756) is no longer running 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2191751 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2191751 ']' 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2191751 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2191751 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.315 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.316 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2191751' 00:05:47.316 killing process with pid 2191751 00:05:47.316 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2191751 00:05:47.316 22:36:54 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2191751 00:05:47.575 00:05:47.575 real 0m1.437s 00:05:47.575 user 0m3.966s 00:05:47.575 sys 0m0.391s 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:47.575 
************************************ 00:05:47.575 END TEST locking_overlapped_coremask 00:05:47.575 ************************************ 00:05:47.575 22:36:55 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:47.575 22:36:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.575 22:36:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.575 22:36:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.575 ************************************ 00:05:47.575 START TEST locking_overlapped_coremask_via_rpc 00:05:47.575 ************************************ 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2192014 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2192014 /var/tmp/spdk.sock 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2192014 ']' 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:47.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.575 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.575 [2024-12-10 22:36:55.279468] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:47.575 [2024-12-10 22:36:55.279509] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192014 ] 00:05:47.835 [2024-12-10 22:36:55.355091] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:47.835 [2024-12-10 22:36:55.355122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:47.835 [2024-12-10 22:36:55.393885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.835 [2024-12-10 22:36:55.393989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.835 [2024-12-10 22:36:55.393990] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2192028 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2192028 /var/tmp/spdk2.sock 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2192028 ']' 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:48.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.094 22:36:55 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.095 [2024-12-10 22:36:55.667524] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:48.095 [2024-12-10 22:36:55.667569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192028 ] 00:05:48.095 [2024-12-10 22:36:55.758095] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:48.095 [2024-12-10 22:36:55.758128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:48.353 [2024-12-10 22:36:55.845384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:48.353 [2024-12-10 22:36:55.849203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:48.353 [2024-12-10 22:36:55.849203] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.921 22:36:56 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.921 [2024-12-10 22:36:56.511237] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2192014 has claimed it. 00:05:48.921 request: 00:05:48.921 { 00:05:48.921 "method": "framework_enable_cpumask_locks", 00:05:48.921 "req_id": 1 00:05:48.921 } 00:05:48.921 Got JSON-RPC error response 00:05:48.921 response: 00:05:48.921 { 00:05:48.921 "code": -32603, 00:05:48.921 "message": "Failed to claim CPU core: 2" 00:05:48.921 } 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2192014 /var/tmp/spdk.sock 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2192014 ']' 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.921 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2192028 /var/tmp/spdk2.sock 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2192028 ']' 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.181 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:49.441 00:05:49.441 real 0m1.703s 00:05:49.441 user 0m0.812s 00:05:49.441 sys 0m0.138s 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.441 22:36:56 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.441 ************************************ 00:05:49.441 END TEST locking_overlapped_coremask_via_rpc 00:05:49.441 ************************************ 00:05:49.441 22:36:56 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:49.441 22:36:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2192014 ]] 00:05:49.441 22:36:56 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2192014 00:05:49.441 22:36:56 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2192014 ']' 00:05:49.441 22:36:56 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2192014 00:05:49.441 22:36:56 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.441 22:36:56 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.441 22:36:56 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2192014 00:05:49.441 22:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:49.441 22:36:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:49.441 22:36:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2192014' 00:05:49.441 killing process with pid 2192014 00:05:49.441 22:36:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2192014 00:05:49.441 22:36:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2192014 00:05:49.701 22:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2192028 ]] 00:05:49.701 22:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2192028 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2192028 ']' 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2192028 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2192028 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2192028' 00:05:49.701 killing process with pid 2192028 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2192028 00:05:49.701 22:36:57 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2192028 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2192014 ]] 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2192014 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2192014 ']' 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2192014 00:05:50.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2192014) - No such process 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2192014 is not found' 00:05:50.270 Process with pid 2192014 is not found 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2192028 ]] 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2192028 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2192028 ']' 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2192028 00:05:50.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2192028) - No such process 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2192028 is not found' 00:05:50.270 Process with pid 2192028 is not found 00:05:50.270 22:36:57 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:50.270 00:05:50.270 real 0m14.589s 00:05:50.270 user 0m25.026s 00:05:50.270 sys 0m5.093s 00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:50.270 22:36:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.270 ************************************ 00:05:50.270 END TEST cpu_locks 00:05:50.270 ************************************ 00:05:50.270 00:05:50.270 real 0m39.355s 00:05:50.270 user 1m14.422s 00:05:50.270 sys 0m8.680s 00:05:50.270 22:36:57 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.270 22:36:57 event -- common/autotest_common.sh@10 -- # set +x 00:05:50.270 ************************************ 00:05:50.270 END TEST event 00:05:50.270 ************************************ 00:05:50.270 22:36:57 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:05:50.270 22:36:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.270 22:36:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.270 22:36:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.270 ************************************ 00:05:50.270 START TEST thread 00:05:50.270 ************************************ 00:05:50.270 22:36:57 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/thread.sh 00:05:50.270 * Looking for test storage... 
00:05:50.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread 00:05:50.270 22:36:57 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:50.270 22:36:57 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:50.270 22:36:57 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:50.270 22:36:57 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:50.270 22:36:57 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:50.270 22:36:57 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:50.270 22:36:57 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:50.270 22:36:57 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:50.270 22:36:57 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:50.270 22:36:57 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:50.270 22:36:57 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:50.270 22:36:57 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:50.270 22:36:57 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:50.270 22:36:57 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:50.270 22:36:57 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:50.270 22:36:57 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:50.270 22:36:57 thread -- scripts/common.sh@345 -- # : 1 00:05:50.270 22:36:57 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:50.270 22:36:57 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:50.270 22:36:57 thread -- scripts/common.sh@365 -- # decimal 1 00:05:50.270 22:36:57 thread -- scripts/common.sh@353 -- # local d=1 00:05:50.270 22:36:57 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:50.270 22:36:57 thread -- scripts/common.sh@355 -- # echo 1 00:05:50.271 22:36:57 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:50.271 22:36:57 thread -- scripts/common.sh@366 -- # decimal 2 00:05:50.271 22:36:57 thread -- scripts/common.sh@353 -- # local d=2 00:05:50.271 22:36:57 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.271 22:36:57 thread -- scripts/common.sh@355 -- # echo 2 00:05:50.271 22:36:57 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.271 22:36:57 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.271 22:36:57 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.271 22:36:57 thread -- scripts/common.sh@368 -- # return 0 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:50.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.271 --rc genhtml_branch_coverage=1 00:05:50.271 --rc genhtml_function_coverage=1 00:05:50.271 --rc genhtml_legend=1 00:05:50.271 --rc geninfo_all_blocks=1 00:05:50.271 --rc geninfo_unexecuted_blocks=1 00:05:50.271 00:05:50.271 ' 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:50.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.271 --rc genhtml_branch_coverage=1 00:05:50.271 --rc genhtml_function_coverage=1 00:05:50.271 --rc genhtml_legend=1 00:05:50.271 --rc geninfo_all_blocks=1 00:05:50.271 --rc geninfo_unexecuted_blocks=1 00:05:50.271 00:05:50.271 ' 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:50.271 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.271 --rc genhtml_branch_coverage=1 00:05:50.271 --rc genhtml_function_coverage=1 00:05:50.271 --rc genhtml_legend=1 00:05:50.271 --rc geninfo_all_blocks=1 00:05:50.271 --rc geninfo_unexecuted_blocks=1 00:05:50.271 00:05:50.271 ' 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:50.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.271 --rc genhtml_branch_coverage=1 00:05:50.271 --rc genhtml_function_coverage=1 00:05:50.271 --rc genhtml_legend=1 00:05:50.271 --rc geninfo_all_blocks=1 00:05:50.271 --rc geninfo_unexecuted_blocks=1 00:05:50.271 00:05:50.271 ' 00:05:50.271 22:36:57 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.271 22:36:57 thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.530 ************************************ 00:05:50.530 START TEST thread_poller_perf 00:05:50.530 ************************************ 00:05:50.530 22:36:58 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:50.530 [2024-12-10 22:36:58.041054] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:05:50.530 [2024-12-10 22:36:58.041133] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192587 ] 00:05:50.530 [2024-12-10 22:36:58.117612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.530 [2024-12-10 22:36:58.156874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.530 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:51.910 [2024-12-10T21:36:59.639Z] ====================================== 00:05:51.910 [2024-12-10T21:36:59.639Z] busy:2305165388 (cyc) 00:05:51.910 [2024-12-10T21:36:59.639Z] total_run_count: 408000 00:05:51.910 [2024-12-10T21:36:59.639Z] tsc_hz: 2300000000 (cyc) 00:05:51.910 [2024-12-10T21:36:59.639Z] ====================================== 00:05:51.910 [2024-12-10T21:36:59.639Z] poller_cost: 5649 (cyc), 2456 (nsec) 00:05:51.910 00:05:51.910 real 0m1.176s 00:05:51.910 user 0m1.099s 00:05:51.910 sys 0m0.074s 00:05:51.910 22:36:59 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:51.910 22:36:59 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.910 ************************************ 00:05:51.910 END TEST thread_poller_perf 00:05:51.910 ************************************ 00:05:51.910 22:36:59 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.910 22:36:59 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:51.910 22:36:59 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.910 22:36:59 thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.910 ************************************ 00:05:51.910 START TEST thread_poller_perf 00:05:51.910 
************************************ 00:05:51.910 22:36:59 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:51.910 [2024-12-10 22:36:59.290911] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:51.910 [2024-12-10 22:36:59.290981] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2192834 ] 00:05:51.910 [2024-12-10 22:36:59.371188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.910 [2024-12-10 22:36:59.409020] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.910 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:52.849 [2024-12-10T21:37:00.578Z] ====================================== 00:05:52.849 [2024-12-10T21:37:00.578Z] busy:2301587570 (cyc) 00:05:52.849 [2024-12-10T21:37:00.578Z] total_run_count: 4884000 00:05:52.849 [2024-12-10T21:37:00.578Z] tsc_hz: 2300000000 (cyc) 00:05:52.849 [2024-12-10T21:37:00.578Z] ====================================== 00:05:52.849 [2024-12-10T21:37:00.578Z] poller_cost: 471 (cyc), 204 (nsec) 00:05:52.849 00:05:52.849 real 0m1.182s 00:05:52.849 user 0m1.104s 00:05:52.849 sys 0m0.073s 00:05:52.849 22:37:00 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.849 22:37:00 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.849 ************************************ 00:05:52.849 END TEST thread_poller_perf 00:05:52.849 ************************************ 00:05:52.849 22:37:00 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:52.849 00:05:52.849 real 0m2.669s 00:05:52.849 user 0m2.344s 00:05:52.849 sys 0m0.337s 00:05:52.849 22:37:00 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.849 22:37:00 thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.849 ************************************ 00:05:52.849 END TEST thread 00:05:52.849 ************************************ 00:05:52.849 22:37:00 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:52.849 22:37:00 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:05:52.849 22:37:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:52.849 22:37:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.849 22:37:00 -- common/autotest_common.sh@10 -- # set +x 00:05:52.849 ************************************ 00:05:52.849 START TEST app_cmdline 00:05:52.849 ************************************ 00:05:52.849 22:37:00 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/cmdline.sh 00:05:53.109 * Looking for test storage... 
00:05:53.109 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:05:53.109 22:37:00 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:53.109 22:37:00 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:53.109 22:37:00 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:53.109 22:37:00 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.109 22:37:00 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:53.109 22:37:00 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.109 22:37:00 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:53.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.109 --rc genhtml_branch_coverage=1 00:05:53.109 --rc genhtml_function_coverage=1 00:05:53.109 --rc genhtml_legend=1 00:05:53.110 --rc geninfo_all_blocks=1 00:05:53.110 --rc geninfo_unexecuted_blocks=1 00:05:53.110 00:05:53.110 ' 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:53.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.110 --rc genhtml_branch_coverage=1 00:05:53.110 --rc genhtml_function_coverage=1 00:05:53.110 --rc genhtml_legend=1 00:05:53.110 --rc geninfo_all_blocks=1 00:05:53.110 --rc geninfo_unexecuted_blocks=1 00:05:53.110 00:05:53.110 ' 00:05:53.110 22:37:00 app_cmdline -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:53.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.110 --rc genhtml_branch_coverage=1 00:05:53.110 --rc genhtml_function_coverage=1 00:05:53.110 --rc genhtml_legend=1 00:05:53.110 --rc geninfo_all_blocks=1 00:05:53.110 --rc geninfo_unexecuted_blocks=1 00:05:53.110 00:05:53.110 ' 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:53.110 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.110 --rc genhtml_branch_coverage=1 00:05:53.110 --rc genhtml_function_coverage=1 00:05:53.110 --rc genhtml_legend=1 00:05:53.110 --rc geninfo_all_blocks=1 00:05:53.110 --rc geninfo_unexecuted_blocks=1 00:05:53.110 00:05:53.110 ' 00:05:53.110 22:37:00 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:53.110 22:37:00 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2193140 00:05:53.110 22:37:00 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2193140 00:05:53.110 22:37:00 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2193140 ']' 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.110 22:37:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.110 [2024-12-10 22:37:00.781575] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:05:53.110 [2024-12-10 22:37:00.781627] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193140 ] 00:05:53.369 [2024-12-10 22:37:00.856715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.369 [2024-12-10 22:37:00.895973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py spdk_get_version 00:05:53.629 { 00:05:53.629 "version": "SPDK v25.01-pre git sha1 66289a6db", 00:05:53.629 "fields": { 00:05:53.629 "major": 25, 00:05:53.629 "minor": 1, 00:05:53.629 "patch": 0, 00:05:53.629 "suffix": "-pre", 00:05:53.629 "commit": "66289a6db" 00:05:53.629 } 00:05:53.629 } 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:53.629 22:37:01 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:05:53.629 22:37:01 app_cmdline -- common/autotest_common.sh@655 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:53.889 request: 00:05:53.889 { 00:05:53.889 "method": "env_dpdk_get_mem_stats", 00:05:53.889 "req_id": 1 00:05:53.889 } 00:05:53.889 Got JSON-RPC error response 00:05:53.889 response: 00:05:53.889 { 00:05:53.889 "code": -32601, 00:05:53.889 "message": "Method not found" 00:05:53.889 } 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:53.889 22:37:01 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2193140 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2193140 ']' 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2193140 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2193140 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2193140' 00:05:53.889 killing process with pid 2193140 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@973 -- # kill 2193140 00:05:53.889 22:37:01 app_cmdline -- common/autotest_common.sh@978 -- # wait 2193140 00:05:54.459 00:05:54.459 real 0m1.335s 00:05:54.459 user 0m1.537s 00:05:54.459 sys 0m0.460s 00:05:54.459 22:37:01 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.459 22:37:01 app_cmdline -- 
common/autotest_common.sh@10 -- # set +x 00:05:54.459 ************************************ 00:05:54.459 END TEST app_cmdline 00:05:54.459 ************************************ 00:05:54.459 22:37:01 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:05:54.459 22:37:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:54.459 22:37:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.459 22:37:01 -- common/autotest_common.sh@10 -- # set +x 00:05:54.459 ************************************ 00:05:54.459 START TEST version 00:05:54.459 ************************************ 00:05:54.459 22:37:01 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/version.sh 00:05:54.459 * Looking for test storage... 00:05:54.459 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:05:54.459 22:37:02 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.459 22:37:02 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.459 22:37:02 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.460 22:37:02 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.460 22:37:02 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.460 22:37:02 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.460 22:37:02 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.460 22:37:02 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.460 22:37:02 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.460 22:37:02 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.460 22:37:02 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.460 22:37:02 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.460 22:37:02 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.460 22:37:02 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:05:54.460 22:37:02 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.460 22:37:02 version -- scripts/common.sh@344 -- # case "$op" in 00:05:54.460 22:37:02 version -- scripts/common.sh@345 -- # : 1 00:05:54.460 22:37:02 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.460 22:37:02 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:54.460 22:37:02 version -- scripts/common.sh@365 -- # decimal 1 00:05:54.460 22:37:02 version -- scripts/common.sh@353 -- # local d=1 00:05:54.460 22:37:02 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.460 22:37:02 version -- scripts/common.sh@355 -- # echo 1 00:05:54.460 22:37:02 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.460 22:37:02 version -- scripts/common.sh@366 -- # decimal 2 00:05:54.460 22:37:02 version -- scripts/common.sh@353 -- # local d=2 00:05:54.460 22:37:02 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.460 22:37:02 version -- scripts/common.sh@355 -- # echo 2 00:05:54.460 22:37:02 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.460 22:37:02 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.460 22:37:02 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.460 22:37:02 version -- scripts/common.sh@368 -- # return 0 00:05:54.460 22:37:02 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.460 22:37:02 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.460 --rc genhtml_branch_coverage=1 00:05:54.460 --rc genhtml_function_coverage=1 00:05:54.460 --rc genhtml_legend=1 00:05:54.460 --rc geninfo_all_blocks=1 00:05:54.460 --rc geninfo_unexecuted_blocks=1 00:05:54.460 00:05:54.460 ' 00:05:54.460 22:37:02 version -- common/autotest_common.sh@1724 -- # 
LCOV_OPTS=' 00:05:54.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.460 --rc genhtml_branch_coverage=1 00:05:54.460 --rc genhtml_function_coverage=1 00:05:54.460 --rc genhtml_legend=1 00:05:54.460 --rc geninfo_all_blocks=1 00:05:54.460 --rc geninfo_unexecuted_blocks=1 00:05:54.460 00:05:54.460 ' 00:05:54.460 22:37:02 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.460 --rc genhtml_branch_coverage=1 00:05:54.460 --rc genhtml_function_coverage=1 00:05:54.460 --rc genhtml_legend=1 00:05:54.460 --rc geninfo_all_blocks=1 00:05:54.460 --rc geninfo_unexecuted_blocks=1 00:05:54.460 00:05:54.460 ' 00:05:54.460 22:37:02 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.460 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.460 --rc genhtml_branch_coverage=1 00:05:54.460 --rc genhtml_function_coverage=1 00:05:54.460 --rc genhtml_legend=1 00:05:54.460 --rc geninfo_all_blocks=1 00:05:54.460 --rc geninfo_unexecuted_blocks=1 00:05:54.460 00:05:54.460 ' 00:05:54.460 22:37:02 version -- app/version.sh@17 -- # get_header_version major 00:05:54.460 22:37:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # cut -f2 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.460 22:37:02 version -- app/version.sh@17 -- # major=25 00:05:54.460 22:37:02 version -- app/version.sh@18 -- # get_header_version minor 00:05:54.460 22:37:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # cut -f2 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.460 22:37:02 version -- 
app/version.sh@18 -- # minor=1 00:05:54.460 22:37:02 version -- app/version.sh@19 -- # get_header_version patch 00:05:54.460 22:37:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # cut -f2 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.460 22:37:02 version -- app/version.sh@19 -- # patch=0 00:05:54.460 22:37:02 version -- app/version.sh@20 -- # get_header_version suffix 00:05:54.460 22:37:02 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/version.h 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # cut -f2 00:05:54.460 22:37:02 version -- app/version.sh@14 -- # tr -d '"' 00:05:54.460 22:37:02 version -- app/version.sh@20 -- # suffix=-pre 00:05:54.460 22:37:02 version -- app/version.sh@22 -- # version=25.1 00:05:54.460 22:37:02 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:54.460 22:37:02 version -- app/version.sh@28 -- # version=25.1rc0 00:05:54.460 22:37:02 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:05:54.460 22:37:02 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:54.721 22:37:02 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:54.721 22:37:02 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:54.721 00:05:54.721 real 0m0.244s 00:05:54.721 user 0m0.142s 00:05:54.721 sys 0m0.144s 00:05:54.721 22:37:02 version -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:54.721 22:37:02 version -- common/autotest_common.sh@10 -- # set +x 00:05:54.721 ************************************ 00:05:54.721 END TEST version 00:05:54.721 ************************************ 00:05:54.721 22:37:02 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:54.721 22:37:02 -- spdk/autotest.sh@194 -- # uname -s 00:05:54.721 22:37:02 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:54.721 22:37:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:54.721 22:37:02 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:54.721 22:37:02 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:54.721 22:37:02 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:54.721 22:37:02 -- common/autotest_common.sh@10 -- # set +x 00:05:54.721 22:37:02 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:54.721 22:37:02 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:54.721 22:37:02 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:54.721 22:37:02 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.721 22:37:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.721 22:37:02 -- common/autotest_common.sh@10 -- # set +x 00:05:54.721 ************************************ 00:05:54.721 START TEST nvmf_tcp 00:05:54.721 ************************************ 00:05:54.721 22:37:02 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf.sh 
--transport=tcp 00:05:54.721 * Looking for test storage... 00:05:54.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:05:54.721 22:37:02 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.721 22:37:02 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.721 22:37:02 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.981 22:37:02 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.981 --rc genhtml_branch_coverage=1 00:05:54.981 --rc genhtml_function_coverage=1 00:05:54.981 --rc genhtml_legend=1 00:05:54.981 --rc geninfo_all_blocks=1 00:05:54.981 --rc geninfo_unexecuted_blocks=1 00:05:54.981 00:05:54.981 ' 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.981 --rc genhtml_branch_coverage=1 00:05:54.981 --rc genhtml_function_coverage=1 00:05:54.981 --rc genhtml_legend=1 00:05:54.981 --rc geninfo_all_blocks=1 00:05:54.981 --rc geninfo_unexecuted_blocks=1 00:05:54.981 00:05:54.981 ' 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.981 --rc genhtml_branch_coverage=1 00:05:54.981 --rc genhtml_function_coverage=1 00:05:54.981 --rc genhtml_legend=1 00:05:54.981 --rc geninfo_all_blocks=1 00:05:54.981 --rc geninfo_unexecuted_blocks=1 00:05:54.981 00:05:54.981 ' 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.981 --rc genhtml_branch_coverage=1 00:05:54.981 --rc genhtml_function_coverage=1 00:05:54.981 --rc genhtml_legend=1 00:05:54.981 --rc geninfo_all_blocks=1 00:05:54.981 --rc geninfo_unexecuted_blocks=1 00:05:54.981 00:05:54.981 ' 00:05:54.981 22:37:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:05:54.981 22:37:02 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:05:54.981 22:37:02 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.981 22:37:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:54.981 ************************************ 00:05:54.981 START TEST nvmf_target_core 00:05:54.981 ************************************ 00:05:54.981 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:54.981 * Looking for test storage... 
00:05:54.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:05:54.981 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:54.981 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:54.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.982 --rc genhtml_branch_coverage=1 00:05:54.982 --rc genhtml_function_coverage=1 00:05:54.982 --rc genhtml_legend=1 00:05:54.982 --rc geninfo_all_blocks=1 00:05:54.982 --rc geninfo_unexecuted_blocks=1 00:05:54.982 00:05:54.982 ' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:54.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.982 --rc genhtml_branch_coverage=1 
00:05:54.982 --rc genhtml_function_coverage=1 00:05:54.982 --rc genhtml_legend=1 00:05:54.982 --rc geninfo_all_blocks=1 00:05:54.982 --rc geninfo_unexecuted_blocks=1 00:05:54.982 00:05:54.982 ' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:54.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.982 --rc genhtml_branch_coverage=1 00:05:54.982 --rc genhtml_function_coverage=1 00:05:54.982 --rc genhtml_legend=1 00:05:54.982 --rc geninfo_all_blocks=1 00:05:54.982 --rc geninfo_unexecuted_blocks=1 00:05:54.982 00:05:54.982 ' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:54.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:54.982 --rc genhtml_branch_coverage=1 00:05:54.982 --rc genhtml_function_coverage=1 00:05:54.982 --rc genhtml_legend=1 00:05:54.982 --rc geninfo_all_blocks=1 00:05:54.982 --rc geninfo_unexecuted_blocks=1 00:05:54.982 00:05:54.982 ' 00:05:54.982 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.242 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.243 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:55.243 ************************************ 00:05:55.243 START TEST nvmf_abort 00:05:55.243 ************************************ 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:55.243 * Looking for test storage... 
00:05:55.243 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 
00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:55.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.243 --rc genhtml_branch_coverage=1 00:05:55.243 --rc genhtml_function_coverage=1 00:05:55.243 --rc genhtml_legend=1 00:05:55.243 --rc geninfo_all_blocks=1 00:05:55.243 
--rc geninfo_unexecuted_blocks=1 00:05:55.243 00:05:55.243 ' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:55.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.243 --rc genhtml_branch_coverage=1 00:05:55.243 --rc genhtml_function_coverage=1 00:05:55.243 --rc genhtml_legend=1 00:05:55.243 --rc geninfo_all_blocks=1 00:05:55.243 --rc geninfo_unexecuted_blocks=1 00:05:55.243 00:05:55.243 ' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:55.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.243 --rc genhtml_branch_coverage=1 00:05:55.243 --rc genhtml_function_coverage=1 00:05:55.243 --rc genhtml_legend=1 00:05:55.243 --rc geninfo_all_blocks=1 00:05:55.243 --rc geninfo_unexecuted_blocks=1 00:05:55.243 00:05:55.243 ' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:55.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.243 --rc genhtml_branch_coverage=1 00:05:55.243 --rc genhtml_function_coverage=1 00:05:55.243 --rc genhtml_legend=1 00:05:55.243 --rc geninfo_all_blocks=1 00:05:55.243 --rc geninfo_unexecuted_blocks=1 00:05:55.243 00:05:55.243 ' 00:05:55.243 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:55.244 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:55.504 22:37:02 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:55.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:05:55.504 22:37:02 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:02.078 22:37:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.078 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:02.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:02.079 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:02.079 22:37:08 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:02.079 Found net devices under 0000:86:00.0: cvl_0_0 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:86:00.1: cvl_0_1' 00:06:02.079 Found net devices under 0000:86:00.1: cvl_0_1 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:02.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:02.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.346 ms 00:06:02.079 00:06:02.079 --- 10.0.0.2 ping statistics --- 00:06:02.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.079 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:02.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:02.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:06:02.079 00:06:02.079 --- 10.0.0.1 ping statistics --- 00:06:02.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:02.079 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=2196816 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2196816 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2196816 ']' 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.079 22:37:08 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.079 [2024-12-10 22:37:09.015060] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:06:02.079 [2024-12-10 22:37:09.015106] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:02.079 [2024-12-10 22:37:09.093017] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:02.079 [2024-12-10 22:37:09.133203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:02.079 [2024-12-10 22:37:09.133244] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:02.079 [2024-12-10 22:37:09.133254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:02.079 [2024-12-10 22:37:09.133259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:02.079 [2024-12-10 22:37:09.133264] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:02.079 [2024-12-10 22:37:09.134574] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:02.079 [2024-12-10 22:37:09.134663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:02.079 [2024-12-10 22:37:09.134665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:02.079 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.079 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 [2024-12-10 22:37:09.283984] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 Malloc0 00:06:02.080 22:37:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 Delay0 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 [2024-12-10 22:37:09.364110] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.080 22:37:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:02.080 [2024-12-10 22:37:09.496855] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:04.008 Initializing NVMe Controllers 00:06:04.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:04.008 controller IO queue size 128 less than required 00:06:04.008 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:04.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:04.008 Initialization complete. Launching workers. 
00:06:04.009 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 35862 00:06:04.009 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35927, failed to submit 62 00:06:04.009 success 35866, unsuccessful 61, failed 0 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:04.009 rmmod nvme_tcp 00:06:04.009 rmmod nvme_fabrics 00:06:04.009 rmmod nvme_keyring 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:04.009 22:37:11 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2196816 ']' 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2196816 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2196816 ']' 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2196816 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.009 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2196816 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2196816' 00:06:04.268 killing process with pid 2196816 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2196816 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2196816 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:04.268 22:37:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:06.807 00:06:06.807 real 0m11.259s 00:06:06.807 user 0m11.859s 00:06:06.807 sys 0m5.483s 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:06.807 ************************************ 00:06:06.807 END TEST nvmf_abort 00:06:06.807 ************************************ 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:06.807 ************************************ 00:06:06.807 START TEST nvmf_ns_hotplug_stress 00:06:06.807 ************************************ 00:06:06.807 22:37:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:06.807 * Looking for test storage... 00:06:06.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:06.807 
22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:06.807 22:37:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:06.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.807 --rc genhtml_branch_coverage=1 00:06:06.807 --rc genhtml_function_coverage=1 00:06:06.807 --rc genhtml_legend=1 00:06:06.807 --rc geninfo_all_blocks=1 00:06:06.807 --rc geninfo_unexecuted_blocks=1 00:06:06.807 00:06:06.807 ' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:06.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.807 --rc genhtml_branch_coverage=1 00:06:06.807 --rc genhtml_function_coverage=1 00:06:06.807 --rc genhtml_legend=1 00:06:06.807 --rc geninfo_all_blocks=1 00:06:06.807 --rc geninfo_unexecuted_blocks=1 00:06:06.807 00:06:06.807 ' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:06.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.807 --rc genhtml_branch_coverage=1 00:06:06.807 --rc genhtml_function_coverage=1 00:06:06.807 --rc genhtml_legend=1 00:06:06.807 --rc geninfo_all_blocks=1 00:06:06.807 --rc geninfo_unexecuted_blocks=1 00:06:06.807 00:06:06.807 ' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:06.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:06.807 --rc genhtml_branch_coverage=1 00:06:06.807 --rc genhtml_function_coverage=1 00:06:06.807 --rc genhtml_legend=1 00:06:06.807 --rc geninfo_all_blocks=1 00:06:06.807 --rc geninfo_unexecuted_blocks=1 00:06:06.807 
00:06:06.807 ' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.807 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:06.808 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:06.808 22:37:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.379 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:13.379 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:13.379 22:37:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:13.379 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:13.380 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:13.380 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:13.380 22:37:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:13.380 Found net devices under 0000:86:00.0: cvl_0_0 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:13.380 22:37:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:13.380 Found net devices under 0000:86:00.1: cvl_0_1 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:13.380 22:37:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:13.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:13.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.391 ms 00:06:13.380 00:06:13.380 --- 10.0.0.2 ping statistics --- 00:06:13.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.380 rtt min/avg/max/mdev = 0.391/0.391/0.391/0.000 ms 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:13.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:13.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:06:13.380 00:06:13.380 --- 10.0.0.1 ping statistics --- 00:06:13.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:13.380 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:06:13.380 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2200840 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2200840 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2200840 ']' 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:13.381 22:37:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.381 [2024-12-10 22:37:20.401784] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:06:13.381 [2024-12-10 22:37:20.401833] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:13.381 [2024-12-10 22:37:20.483915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:13.381 [2024-12-10 22:37:20.524899] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:13.381 [2024-12-10 22:37:20.524936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:13.381 [2024-12-10 22:37:20.524944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:13.381 [2024-12-10 22:37:20.524950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:13.381 [2024-12-10 22:37:20.524955] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:13.381 [2024-12-10 22:37:20.526330] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:13.381 [2024-12-10 22:37:20.526436] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.381 [2024-12-10 22:37:20.526437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:13.639 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:13.898 [2024-12-10 22:37:21.458616] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:13.898 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:14.156 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:14.156 [2024-12-10 22:37:21.856091] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:14.415 22:37:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:14.415 22:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:14.674 Malloc0 00:06:14.674 22:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:14.932 Delay0 00:06:14.932 22:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.191 22:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:15.191 NULL1 00:06:15.191 22:37:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:15.451 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:15.451 22:37:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2201329 00:06:15.451 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:15.451 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.710 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.968 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:15.969 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:15.969 true 00:06:16.228 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:16.228 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.228 22:37:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.486 22:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:16.486 22:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:16.745 true 00:06:16.745 
22:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:16.745 22:37:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.123 Read completed with error (sct=0, sc=11) 00:06:18.123 22:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.123 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:18.123 22:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:18.123 22:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:18.123 true 00:06:18.382 22:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:18.382 22:37:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.382 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.640 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:18.640 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:18.899 true 00:06:18.899 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:18.899 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.158 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.158 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:19.158 22:37:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:19.417 true 00:06:19.417 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:19.417 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.676 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.935 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:19.935 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:19.935 true 00:06:20.194 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:20.194 22:37:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.241 22:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.241 22:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:21.241 22:37:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:21.542 true 00:06:21.543 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:21.543 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.801 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.801 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:21.801 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize 
NULL1 1008 00:06:22.060 true 00:06:22.060 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:22.060 22:37:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.997 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:22.997 22:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.255 22:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:23.255 22:37:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:23.514 true 00:06:23.514 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:23.514 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.772 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.031 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:24.031 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:24.031 true 
00:06:24.031 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:24.031 22:37:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.408 22:37:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:25.408 22:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:25.408 22:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:25.668 true 00:06:25.668 22:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:25.668 22:37:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.604 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:26.604 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:26.604 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:26.863 true 00:06:26.863 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:26.863 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.121 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.380 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:27.380 22:37:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:27.380 true 00:06:27.380 22:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:27.380 22:37:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:28.757 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:06:28.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.757 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:28.757 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:28.757 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:29.016 true 00:06:29.016 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:29.016 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:29.275 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:29.275 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:29.275 22:37:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:29.533 true 00:06:29.533 22:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:29.533 22:37:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.912 22:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:30.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:30.912 22:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:30.912 22:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:31.171 true 00:06:31.171 22:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:31.171 22:37:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.108 22:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.108 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:32.108 22:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:32.108 22:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:32.366 true 00:06:32.366 22:37:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:32.366 22:37:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.625 22:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.625 22:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:32.625 22:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:32.884 true 00:06:32.884 22:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:32.884 22:37:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.263 22:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.263 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:34.263 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:34.263 22:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:34.263 22:37:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:34.522 true 00:06:34.522 22:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:34.522 22:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.459 22:37:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.459 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:35.459 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:35.717 true 00:06:35.717 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:35.717 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.976 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.976 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1021 00:06:35.976 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:36.235 true 00:06:36.235 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:36.235 22:37:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:37.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.612 22:37:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:37.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.612 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.612 22:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:37.612 22:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:37.871 true 00:06:37.871 22:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:37.871 22:37:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.438 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:38.438 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.697 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:38.697 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:38.956 true 00:06:38.956 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:38.956 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:39.215 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.474 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:39.474 22:37:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:39.474 true 00:06:39.474 22:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:39.474 22:37:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:40.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.852 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.852 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:40.852 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:40.852 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:41.111 true 00:06:41.111 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:41.111 22:37:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.047 22:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.047 22:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:42.047 22:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:42.306 true 00:06:42.306 22:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:42.306 22:37:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.565 22:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.824 22:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:42.824 22:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:42.824 true 00:06:43.083 22:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:43.083 22:37:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:44.021 22:37:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.021 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.280 Message suppressed 999 times: Read completed with error 
(sct=0, sc=11) 00:06:44.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.280 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:44.280 22:37:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:44.280 22:37:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:44.538 true 00:06:44.538 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:44.538 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.517 22:37:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.517 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:45.517 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:45.789 true 00:06:45.789 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:45.789 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.789 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.789 Initializing NVMe Controllers 00:06:45.789 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:45.789 Controller IO queue size 128, less than required. 00:06:45.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:45.789 Controller IO queue size 128, less than required. 00:06:45.789 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:45.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:45.789 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:06:45.789 Initialization complete. Launching workers. 00:06:45.789 ======================================================== 00:06:45.789 Latency(us) 00:06:45.789 Device Information : IOPS MiB/s Average min max 00:06:45.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1639.71 0.80 47740.81 1976.28 1011157.54 00:06:45.789 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 15660.14 7.65 8152.82 1634.06 380497.90 00:06:45.789 ======================================================== 00:06:45.789 Total : 17299.85 8.45 11905.03 1634.06 1011157.54 00:06:45.789 00:06:46.048 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:46.048 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:46.307 true 00:06:46.307 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2201329 00:06:46.307 
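The iterations above all follow the same cycle from `ns_hotplug_stress.sh` (script lines @44–@50): check that the I/O load process is still alive, detach namespace 1, re-attach the `Delay0` bdev, then grow the `NULL1` null bdev by one unit. A minimal standalone sketch of that control flow, with a stub `rpc` function standing in for `scripts/rpc.py` (the stub and the fixed 3-iteration count are assumptions for illustration; the real loop runs until the perf process exits):

```shell
#!/bin/sh
# Sketch of the hotplug stress cycle seen in the log above.
# rpc is a hypothetical stub so this runs without an SPDK target.
rpc() { echo "rpc $*"; }

PERF_PID=$$        # in the real test, the PID of the nvmf I/O load (e.g. 2201329)
null_size=1000

for i in 1 2 3; do
    kill -0 "$PERF_PID" || break   # @44: stop once the I/O process is gone
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46
    null_size=$((null_size + 1))                                  # @49
    rpc bdev_null_resize NULL1 "$null_size"                       # @50
done
echo "final null_size=$null_size"
```

The resize on each pass is what produces the monotonically increasing `null_size=1011 ... 1030` values in the log; the loop exits when `kill -0` fails, matching the "No such process" message that follows.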
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2201329) - No such process 00:06:46.307 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2201329 00:06:46.307 22:37:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.566 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:46.566 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:06:46.566 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:46.566 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:46.566 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:46.566 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:46.825 null0 00:06:46.825 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:46.825 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:46.825 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:47.084 null1 00:06:47.084 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.084 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.084 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:47.343 null2 00:06:47.343 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.343 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.343 22:37:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:47.343 null3 00:06:47.601 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.601 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.601 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:47.601 null4 00:06:47.601 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.601 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.601 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:47.858 null5 00:06:47.858 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:47.858 22:37:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:47.858 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:48.118 null6 00:06:48.118 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.118 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.118 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:48.377 null7 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.377 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2206945 2206946 2206948 2206950 2206952 2206954 2206956 2206958
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.378 22:37:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:48.378 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:48.378 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.378 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.637 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:48.638 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:48.897 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.157 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:49.417 22:37:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:49.676 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:49.936 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.195 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.455 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.455 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.714 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:50.974 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:51.233
22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.233 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.234 22:37:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.492 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.492 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:51.492 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:51.492 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.493 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:51.493 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:51.493 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:51.493 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:51.752 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.011 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:52.012 22:37:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.012 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:52.271 22:37:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.530 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.531 22:38:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:52.531 rmmod nvme_tcp 00:06:52.531 rmmod nvme_fabrics 00:06:52.531 rmmod nvme_keyring 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:06:52.531 22:38:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2200840 ']' 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2200840 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2200840 ']' 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2200840 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2200840 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2200840' 00:06:52.531 killing process with pid 2200840 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2200840 00:06:52.531 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2200840 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # 
nvmf_tcp_fini 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:52.790 22:38:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:55.327 00:06:55.327 real 0m48.388s 00:06:55.327 user 3m16.979s 00:06:55.327 sys 0m15.676s 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:55.327 ************************************ 00:06:55.327 END TEST nvmf_ns_hotplug_stress 00:06:55.327 ************************************ 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.327 22:38:02 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:55.327 ************************************ 00:06:55.327 START TEST nvmf_delete_subsystem 00:06:55.327 ************************************ 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:55.327 * Looking for test storage... 00:06:55.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:55.327 22:38:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.327 --rc genhtml_branch_coverage=1 00:06:55.327 --rc genhtml_function_coverage=1 00:06:55.327 --rc genhtml_legend=1 
00:06:55.327 --rc geninfo_all_blocks=1 00:06:55.327 --rc geninfo_unexecuted_blocks=1 00:06:55.327 00:06:55.327 ' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.327 --rc genhtml_branch_coverage=1 00:06:55.327 --rc genhtml_function_coverage=1 00:06:55.327 --rc genhtml_legend=1 00:06:55.327 --rc geninfo_all_blocks=1 00:06:55.327 --rc geninfo_unexecuted_blocks=1 00:06:55.327 00:06:55.327 ' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.327 --rc genhtml_branch_coverage=1 00:06:55.327 --rc genhtml_function_coverage=1 00:06:55.327 --rc genhtml_legend=1 00:06:55.327 --rc geninfo_all_blocks=1 00:06:55.327 --rc geninfo_unexecuted_blocks=1 00:06:55.327 00:06:55.327 ' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:55.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:55.327 --rc genhtml_branch_coverage=1 00:06:55.327 --rc genhtml_function_coverage=1 00:06:55.327 --rc genhtml_legend=1 00:06:55.327 --rc geninfo_all_blocks=1 00:06:55.327 --rc geninfo_unexecuted_blocks=1 00:06:55.327 00:06:55.327 ' 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:55.327 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@15 -- # shopt -s extglob 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:55.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:55.328 22:38:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:06:55.328 22:38:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:01.898 22:38:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:01.898 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:01.898 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:01.898 Found net devices under 0000:86:00.0: cvl_0_0 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:01.898 22:38:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:01.898 Found net devices under 0000:86:00.1: cvl_0_1 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 
00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:01.898 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:01.898 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:01.898 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:07:01.898 00:07:01.898 --- 10.0.0.2 ping statistics --- 00:07:01.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.898 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:01.899 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:01.899 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:07:01.899 00:07:01.899 --- 10.0.0.1 ping statistics --- 00:07:01.899 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.899 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2211361 00:07:01.899 22:38:08 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2211361 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2211361 ']' 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.899 22:38:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 [2024-12-10 22:38:08.838879] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:07:01.899 [2024-12-10 22:38:08.838928] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.899 [2024-12-10 22:38:08.920938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.899 [2024-12-10 22:38:08.962409] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:01.899 [2024-12-10 22:38:08.962445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.899 [2024-12-10 22:38:08.962452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.899 [2024-12-10 22:38:08.962461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.899 [2024-12-10 22:38:08.962467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:01.899 [2024-12-10 22:38:08.963687] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.899 [2024-12-10 22:38:08.963690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 [2024-12-10 22:38:09.101103] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 [2024-12-10 22:38:09.121317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 NULL1 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.899 22:38:09 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 Delay0 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2211516 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:01.899 22:38:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:01.899 [2024-12-10 22:38:09.213088] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:07:03.802 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:03.802 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:03.802 22:38:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error 
(sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Write completed with error (sct=0, sc=8) 00:07:03.802 starting I/O failed: -6 00:07:03.802 starting I/O failed: -6 00:07:03.802 starting I/O failed: -6 00:07:03.802 starting I/O failed: -6 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.802 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with 
error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 
00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with 
error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 starting I/O failed: -6 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 [2024-12-10 22:38:11.293255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f75f400d510 is same with the state(6) to be set 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 
Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Write completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:03.803 Read completed with error (sct=0, sc=8) 00:07:04.738 [2024-12-10 22:38:12.266837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a19b0 is same with the state(6) to be set 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Write completed with error (sct=0, sc=8) 00:07:04.738 Write completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Write completed with error (sct=0, sc=8) 00:07:04.738 Write completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.738 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with 
error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 [2024-12-10 22:38:12.291437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a0680 is same with the state(6) to be set 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 [2024-12-10 22:38:12.291631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a02c0 is same with the state(6) to be set 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error 
(sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 [2024-12-10 22:38:12.295749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f75f400d840 is same with the state(6) to be set 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 Write completed with error (sct=0, sc=8) 00:07:04.739 Read completed with error (sct=0, sc=8) 00:07:04.739 [2024-12-10 22:38:12.296438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f75f400d060 is same with the state(6) to be set 00:07:04.739 Initializing NVMe Controllers 00:07:04.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:04.739 Controller IO queue size 128, less than required. 00:07:04.739 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:04.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:04.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:04.739 Initialization complete. Launching workers. 
00:07:04.739 ======================================================== 00:07:04.739 Latency(us) 00:07:04.739 Device Information : IOPS MiB/s Average min max 00:07:04.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.31 0.08 899906.92 287.76 1005811.90 00:07:04.739 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 142.42 0.07 966769.20 219.90 1010122.18 00:07:04.739 ======================================================== 00:07:04.739 Total : 311.72 0.15 930454.22 219.90 1010122.18 00:07:04.739 00:07:04.739 [2024-12-10 22:38:12.297038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a19b0 (9): Bad file descriptor 00:07:04.739 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:04.739 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:04.739 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:04.739 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2211516 00:07:04.739 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:05.309 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:05.309 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2211516 00:07:05.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2211516) - No such process 00:07:05.309 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2211516 00:07:05.309 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:05.309 22:38:12 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2211516 00:07:05.309 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:05.309 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2211516 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:05.310 
22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.310 [2024-12-10 22:38:12.829756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2212077 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:05.310 22:38:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:05.310 [2024-12-10 22:38:12.914893] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:05.877 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:05.877 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:05.877 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.136 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.136 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:06.136 22:38:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:06.702 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:06.702 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:06.702 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.270 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.270 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:07.270 22:38:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:07.837 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:07.837 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:07.837 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.404 22:38:15 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.404 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:08.404 22:38:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:08.404 Initializing NVMe Controllers 00:07:08.404 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:08.404 Controller IO queue size 128, less than required. 00:07:08.404 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:08.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:08.404 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:08.404 Initialization complete. Launching workers. 00:07:08.404 ======================================================== 00:07:08.404 Latency(us) 00:07:08.404 Device Information : IOPS MiB/s Average min max 00:07:08.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002427.39 1000143.29 1041008.38 00:07:08.404 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004126.25 1000189.48 1010470.51 00:07:08.404 ======================================================== 00:07:08.404 Total : 256.00 0.12 1003276.82 1000143.29 1041008.38 00:07:08.404 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2212077 00:07:08.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2212077) - No such process 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- 
# wait 2212077 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:08.663 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:08.663 rmmod nvme_tcp 00:07:08.922 rmmod nvme_fabrics 00:07:08.922 rmmod nvme_keyring 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2211361 ']' 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2211361 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2211361 ']' 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2211361 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:08.922 22:38:16 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2211361 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2211361' 00:07:08.922 killing process with pid 2211361 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2211361 00:07:08.922 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2211361 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:09.181 22:38:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.087 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:11.088 00:07:11.088 real 0m16.162s 00:07:11.088 user 0m29.058s 00:07:11.088 sys 0m5.482s 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:11.088 ************************************ 00:07:11.088 END TEST nvmf_delete_subsystem 00:07:11.088 ************************************ 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:11.088 ************************************ 00:07:11.088 START TEST nvmf_host_management 00:07:11.088 ************************************ 00:07:11.088 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:11.348 * Looking for test storage... 
00:07:11.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:11.348 22:38:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.348 22:38:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.348 --rc genhtml_branch_coverage=1 00:07:11.348 --rc genhtml_function_coverage=1 00:07:11.348 --rc genhtml_legend=1 00:07:11.348 --rc geninfo_all_blocks=1 00:07:11.348 --rc geninfo_unexecuted_blocks=1 00:07:11.348 00:07:11.348 ' 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.348 --rc genhtml_branch_coverage=1 00:07:11.348 --rc genhtml_function_coverage=1 00:07:11.348 --rc genhtml_legend=1 00:07:11.348 --rc geninfo_all_blocks=1 00:07:11.348 --rc geninfo_unexecuted_blocks=1 00:07:11.348 00:07:11.348 ' 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.348 --rc genhtml_branch_coverage=1 00:07:11.348 --rc genhtml_function_coverage=1 00:07:11.348 --rc genhtml_legend=1 00:07:11.348 --rc geninfo_all_blocks=1 00:07:11.348 --rc geninfo_unexecuted_blocks=1 00:07:11.348 00:07:11.348 ' 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:11.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.348 --rc genhtml_branch_coverage=1 00:07:11.348 --rc genhtml_function_coverage=1 00:07:11.348 --rc genhtml_legend=1 00:07:11.348 --rc geninfo_all_blocks=1 00:07:11.348 --rc geninfo_unexecuted_blocks=1 00:07:11.348 00:07:11.348 ' 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:11.348 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:11.349 22:38:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:11.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:11.349 22:38:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:17.917 22:38:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:17.917 22:38:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:17.917 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:17.917 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:17.917 22:38:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:17.917 Found net devices under 0000:86:00.0: cvl_0_0 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:17.917 Found net devices under 0000:86:00.1: cvl_0_1 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:17.917 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:17.918 22:38:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:17.918 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:17.918 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:07:17.918 00:07:17.918 --- 10.0.0.2 ping statistics --- 00:07:17.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.918 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:17.918 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:17.918 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:07:17.918 00:07:17.918 --- 10.0.0.1 ping statistics --- 00:07:17.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:17.918 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:17.918 22:38:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2216306 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2216306 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2216306 ']' 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 [2024-12-10 22:38:25.097849] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:07:17.918 [2024-12-10 22:38:25.097895] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:17.918 [2024-12-10 22:38:25.176941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:17.918 [2024-12-10 22:38:25.217236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:17.918 [2024-12-10 22:38:25.217274] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:17.918 [2024-12-10 22:38:25.217281] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:17.918 [2024-12-10 22:38:25.217287] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:17.918 [2024-12-10 22:38:25.217292] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:17.918 [2024-12-10 22:38:25.218764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.918 [2024-12-10 22:38:25.218852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:17.918 [2024-12-10 22:38:25.218958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.918 [2024-12-10 22:38:25.218959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 [2024-12-10 22:38:25.369109] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:17.918 22:38:25 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 Malloc0 00:07:17.918 [2024-12-10 22:38:25.441396] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.918 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2216351 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2216351 /var/tmp/bdevperf.sock 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2216351 ']' 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:17.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:17.919 { 00:07:17.919 "params": { 00:07:17.919 "name": "Nvme$subsystem", 00:07:17.919 "trtype": "$TEST_TRANSPORT", 00:07:17.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:17.919 "adrfam": "ipv4", 00:07:17.919 "trsvcid": "$NVMF_PORT", 00:07:17.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:17.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:17.919 "hdgst": ${hdgst:-false}, 
00:07:17.919 "ddgst": ${ddgst:-false} 00:07:17.919 }, 00:07:17.919 "method": "bdev_nvme_attach_controller" 00:07:17.919 } 00:07:17.919 EOF 00:07:17.919 )") 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:17.919 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:17.919 "params": { 00:07:17.919 "name": "Nvme0", 00:07:17.919 "trtype": "tcp", 00:07:17.919 "traddr": "10.0.0.2", 00:07:17.919 "adrfam": "ipv4", 00:07:17.919 "trsvcid": "4420", 00:07:17.919 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:17.919 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:17.919 "hdgst": false, 00:07:17.919 "ddgst": false 00:07:17.919 }, 00:07:17.919 "method": "bdev_nvme_attach_controller" 00:07:17.919 }' 00:07:17.919 [2024-12-10 22:38:25.538178] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:07:17.919 [2024-12-10 22:38:25.538220] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216351 ] 00:07:17.919 [2024-12-10 22:38:25.613013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.177 [2024-12-10 22:38:25.654388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.177 Running I/O for 10 seconds... 
00:07:18.177 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.177 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:18.177 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:18.177 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.177 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=79 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 79 -ge 100 ']' 00:07:18.436 22:38:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=705 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 705 -ge 100 ']' 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:18.695 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:18.696 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:18.696 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.696 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.696 [2024-12-10 22:38:26.260387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 
[2024-12-10 22:38:26.260646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260732] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260810] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.696 [2024-12-10 22:38:26.260968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.696 [2024-12-10 22:38:26.260974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:18.696 [2024-12-10 22:38:26.260982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.260988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.260997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261062] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:07:18.697 [2024-12-10 22:38:26.261239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 
22:38:26.261318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.261369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:18.697 [2024-12-10 22:38:26.261376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:18.697 [2024-12-10 22:38:26.262360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:18.697 task offset: 101376 on job bdev=Nvme0n1 fails 00:07:18.697 00:07:18.697 Latency(us) 00:07:18.697 [2024-12-10T21:38:26.426Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:18.697 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 
65536) 00:07:18.697 Job: Nvme0n1 ended in about 0.40 seconds with error 00:07:18.697 Verification LBA range: start 0x0 length 0x400 00:07:18.697 Nvme0n1 : 0.40 1905.97 119.12 158.83 0.00 30147.63 1438.94 27354.16 00:07:18.697 [2024-12-10T21:38:26.426Z] =================================================================================================================== 00:07:18.697 [2024-12-10T21:38:26.426Z] Total : 1905.97 119.12 158.83 0.00 30147.63 1438.94 27354.16 00:07:18.697 [2024-12-10 22:38:26.264776] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:18.697 [2024-12-10 22:38:26.264799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17a91a0 (9): Bad file descriptor 00:07:18.697 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.697 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:18.697 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:18.697 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:18.697 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:18.697 22:38:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:18.697 [2024-12-10 22:38:26.285539] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2216351 00:07:19.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2216351) - No such process 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:19.630 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:19.630 { 00:07:19.630 "params": { 00:07:19.631 "name": "Nvme$subsystem", 00:07:19.631 "trtype": "$TEST_TRANSPORT", 00:07:19.631 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:19.631 "adrfam": "ipv4", 00:07:19.631 "trsvcid": "$NVMF_PORT", 00:07:19.631 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:19.631 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:19.631 "hdgst": ${hdgst:-false}, 00:07:19.631 "ddgst": ${ddgst:-false} 00:07:19.631 }, 00:07:19.631 "method": "bdev_nvme_attach_controller" 00:07:19.631 } 00:07:19.631 EOF 00:07:19.631 )") 00:07:19.631 
22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:19.631 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:19.631 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:19.631 22:38:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:19.631 "params": { 00:07:19.631 "name": "Nvme0", 00:07:19.631 "trtype": "tcp", 00:07:19.631 "traddr": "10.0.0.2", 00:07:19.631 "adrfam": "ipv4", 00:07:19.631 "trsvcid": "4420", 00:07:19.631 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:19.631 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:19.631 "hdgst": false, 00:07:19.631 "ddgst": false 00:07:19.631 }, 00:07:19.631 "method": "bdev_nvme_attach_controller" 00:07:19.631 }' 00:07:19.631 [2024-12-10 22:38:27.331900] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:07:19.631 [2024-12-10 22:38:27.331951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216792 ] 00:07:19.889 [2024-12-10 22:38:27.409386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.889 [2024-12-10 22:38:27.448450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.152 Running I/O for 1 seconds... 
00:07:21.088 1984.00 IOPS, 124.00 MiB/s 00:07:21.088 Latency(us) 00:07:21.088 [2024-12-10T21:38:28.817Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:21.088 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:21.088 Verification LBA range: start 0x0 length 0x400 00:07:21.088 Nvme0n1 : 1.03 1993.74 124.61 0.00 0.00 31596.36 6069.20 27468.13 00:07:21.088 [2024-12-10T21:38:28.817Z] =================================================================================================================== 00:07:21.088 [2024-12-10T21:38:28.817Z] Total : 1993.74 124.61 0.00 0.00 31596.36 6069.20 27468.13 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:21.346 22:38:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:21.346 22:38:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:21.346 rmmod nvme_tcp 00:07:21.346 rmmod nvme_fabrics 00:07:21.346 rmmod nvme_keyring 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2216306 ']' 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2216306 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2216306 ']' 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2216306 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:21.346 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2216306 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2216306' 00:07:21.605 killing process with pid 2216306 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2216306 00:07:21.605 22:38:29 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2216306 00:07:21.605 [2024-12-10 22:38:29.252019] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:21.605 22:38:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:24.143 00:07:24.143 real 0m12.543s 00:07:24.143 user 0m20.237s 
00:07:24.143 sys 0m5.596s 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:24.143 ************************************ 00:07:24.143 END TEST nvmf_host_management 00:07:24.143 ************************************ 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:24.143 ************************************ 00:07:24.143 START TEST nvmf_lvol 00:07:24.143 ************************************ 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:24.143 * Looking for test storage... 
00:07:24.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:24.143 22:38:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:24.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.143 --rc genhtml_branch_coverage=1 00:07:24.143 --rc genhtml_function_coverage=1 00:07:24.143 --rc genhtml_legend=1 00:07:24.143 --rc geninfo_all_blocks=1 00:07:24.143 --rc geninfo_unexecuted_blocks=1 
00:07:24.143 00:07:24.143 ' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:24.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.143 --rc genhtml_branch_coverage=1 00:07:24.143 --rc genhtml_function_coverage=1 00:07:24.143 --rc genhtml_legend=1 00:07:24.143 --rc geninfo_all_blocks=1 00:07:24.143 --rc geninfo_unexecuted_blocks=1 00:07:24.143 00:07:24.143 ' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:24.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.143 --rc genhtml_branch_coverage=1 00:07:24.143 --rc genhtml_function_coverage=1 00:07:24.143 --rc genhtml_legend=1 00:07:24.143 --rc geninfo_all_blocks=1 00:07:24.143 --rc geninfo_unexecuted_blocks=1 00:07:24.143 00:07:24.143 ' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:24.143 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:24.143 --rc genhtml_branch_coverage=1 00:07:24.143 --rc genhtml_function_coverage=1 00:07:24.143 --rc genhtml_legend=1 00:07:24.143 --rc geninfo_all_blocks=1 00:07:24.143 --rc geninfo_unexecuted_blocks=1 00:07:24.143 00:07:24.143 ' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:24.143 22:38:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:24.143 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:24.144 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:24.144 22:38:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:30.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:30.738 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:30.738 
22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:30.738 Found net devices under 0000:86:00.0: cvl_0_0 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:30.738 22:38:37 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:30.738 Found net devices under 0000:86:00.1: cvl_0_1 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:30.738 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:30.738 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:07:30.738 00:07:30.738 --- 10.0.0.2 ping statistics --- 00:07:30.738 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.738 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:30.738 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:30.738 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:30.738 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:07:30.739 00:07:30.739 --- 10.0.0.1 ping statistics --- 00:07:30.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:30.739 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2220597 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2220597 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2220597 ']' 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.739 [2024-12-10 22:38:37.727396] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:07:30.739 [2024-12-10 22:38:37.727446] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:30.739 [2024-12-10 22:38:37.806313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:30.739 [2024-12-10 22:38:37.845901] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:30.739 [2024-12-10 22:38:37.845939] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:30.739 [2024-12-10 22:38:37.845947] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:30.739 [2024-12-10 22:38:37.845953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:30.739 [2024-12-10 22:38:37.845957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:30.739 [2024-12-10 22:38:37.847358] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.739 [2024-12-10 22:38:37.847465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.739 [2024-12-10 22:38:37.847466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:30.739 22:38:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:30.739 [2024-12-10 22:38:38.174129] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:30.739 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:30.739 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:30.739 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:31.027 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:31.027 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:31.298 22:38:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:31.556 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=446a6abc-aa67-4cce-b7dc-490223d09f89 00:07:31.556 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 446a6abc-aa67-4cce-b7dc-490223d09f89 lvol 20 00:07:31.556 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=b5d4aaa1-1285-4615-99d1-0457fce66114 00:07:31.556 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:31.815 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b5d4aaa1-1285-4615-99d1-0457fce66114 00:07:32.072 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:32.330 [2024-12-10 22:38:39.874097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:32.330 22:38:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:32.588 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:32.588 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2221088 00:07:32.588 22:38:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:33.524 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot b5d4aaa1-1285-4615-99d1-0457fce66114 MY_SNAPSHOT 00:07:33.783 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5f363dcc-36a5-4400-b3ed-a5fcfb388962 00:07:33.783 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize b5d4aaa1-1285-4615-99d1-0457fce66114 30 00:07:34.042 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 5f363dcc-36a5-4400-b3ed-a5fcfb388962 MY_CLONE 00:07:34.300 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=adbf7a85-0a1e-454a-a6f6-14aa2b1bb28d 00:07:34.300 22:38:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate adbf7a85-0a1e-454a-a6f6-14aa2b1bb28d 00:07:34.868 22:38:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2221088 00:07:42.989 Initializing NVMe Controllers 00:07:42.989 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:42.989 Controller IO queue size 128, less than required. 00:07:42.989 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:42.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:42.989 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:42.989 Initialization complete. Launching workers. 00:07:42.989 ======================================================== 00:07:42.989 Latency(us) 00:07:42.989 Device Information : IOPS MiB/s Average min max 00:07:42.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11877.90 46.40 10777.58 1613.83 70626.16 00:07:42.989 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12036.00 47.02 10636.91 3085.05 59535.69 00:07:42.989 ======================================================== 00:07:42.989 Total : 23913.90 93.41 10706.78 1613.83 70626.16 00:07:42.989 00:07:42.989 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:43.248 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete b5d4aaa1-1285-4615-99d1-0457fce66114 00:07:43.506 22:38:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 446a6abc-aa67-4cce-b7dc-490223d09f89 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:07:43.506 22:38:51 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:43.506 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:43.506 rmmod nvme_tcp 00:07:43.506 rmmod nvme_fabrics 00:07:43.765 rmmod nvme_keyring 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2220597 ']' 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2220597 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2220597 ']' 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2220597 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2220597 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2220597' 00:07:43.765 killing process with pid 2220597 00:07:43.765 
22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2220597 00:07:43.765 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2220597 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.023 22:38:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:45.928 00:07:45.928 real 0m22.158s 00:07:45.928 user 1m3.668s 00:07:45.928 sys 0m7.725s 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:45.928 ************************************ 00:07:45.928 
END TEST nvmf_lvol 00:07:45.928 ************************************ 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.928 22:38:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.190 ************************************ 00:07:46.190 START TEST nvmf_lvs_grow 00:07:46.190 ************************************ 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:46.190 * Looking for test storage... 00:07:46.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.190 22:38:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.190 --rc genhtml_branch_coverage=1 00:07:46.190 --rc genhtml_function_coverage=1 00:07:46.190 --rc genhtml_legend=1 00:07:46.190 --rc geninfo_all_blocks=1 00:07:46.190 --rc geninfo_unexecuted_blocks=1 00:07:46.190 00:07:46.190 ' 
00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.190 --rc genhtml_branch_coverage=1 00:07:46.190 --rc genhtml_function_coverage=1 00:07:46.190 --rc genhtml_legend=1 00:07:46.190 --rc geninfo_all_blocks=1 00:07:46.190 --rc geninfo_unexecuted_blocks=1 00:07:46.190 00:07:46.190 ' 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.190 --rc genhtml_branch_coverage=1 00:07:46.190 --rc genhtml_function_coverage=1 00:07:46.190 --rc genhtml_legend=1 00:07:46.190 --rc geninfo_all_blocks=1 00:07:46.190 --rc geninfo_unexecuted_blocks=1 00:07:46.190 00:07:46.190 ' 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:46.190 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.190 --rc genhtml_branch_coverage=1 00:07:46.190 --rc genhtml_function_coverage=1 00:07:46.190 --rc genhtml_legend=1 00:07:46.190 --rc geninfo_all_blocks=1 00:07:46.190 --rc geninfo_unexecuted_blocks=1 00:07:46.190 00:07:46.190 ' 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.190 22:38:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.190 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.191 22:38:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:46.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:46.191 
22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:07:46.191 22:38:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:52.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:52.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:52.763 
22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.763 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:52.764 Found net devices under 0000:86:00.0: cvl_0_0 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:52.764 Found net devices under 0000:86:00.1: cvl_0_1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:52.764 22:38:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:52.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.381 ms 00:07:52.764 00:07:52.764 --- 10.0.0.2 ping statistics --- 00:07:52.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.764 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:52.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:07:52.764 00:07:52.764 --- 10.0.0.1 ping statistics --- 00:07:52.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.764 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2226475 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2226475 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2226475 ']' 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:52.764 22:38:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.764 [2024-12-10 22:38:59.927487] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:07:52.764 [2024-12-10 22:38:59.927536] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.764 [2024-12-10 22:38:59.993217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.764 [2024-12-10 22:39:00.040658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:52.764 [2024-12-10 22:39:00.040692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.764 [2024-12-10 22:39:00.040700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.764 [2024-12-10 22:39:00.040707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.764 [2024-12-10 22:39:00.040712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:52.764 [2024-12-10 22:39:00.041211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:52.764 [2024-12-10 22:39:00.358393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.764 ************************************ 00:07:52.764 START TEST lvs_grow_clean 00:07:52.764 ************************************ 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:52.764 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:53.024 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:53.024 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:53.283 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:07:53.283 22:39:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:07:53.283 22:39:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:53.541 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:53.541 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:53.541 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 lvol 150 00:07:53.801 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba 00:07:53.801 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:07:53.801 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:53.801 [2024-12-10 22:39:01.459100] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:53.801 [2024-12-10 22:39:01.459162] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:53.801 true 00:07:53.801 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:07:53.801 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:54.060 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:54.060 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:54.319 22:39:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba 00:07:54.578 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:54.578 [2024-12-10 22:39:02.225462] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:54.578 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2227103 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2227103 /var/tmp/bdevperf.sock 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2227103 ']' 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:54.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.838 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:54.838 [2024-12-10 22:39:02.478825] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:07:54.838 [2024-12-10 22:39:02.478873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227103 ] 00:07:54.838 [2024-12-10 22:39:02.553170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.097 [2024-12-10 22:39:02.595318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:55.097 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:55.097 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:55.097 22:39:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:55.666 Nvme0n1 00:07:55.666 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:55.666 [ 00:07:55.666 { 00:07:55.666 "name": "Nvme0n1", 00:07:55.666 "aliases": [ 00:07:55.666 "f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba" 00:07:55.666 ], 00:07:55.666 "product_name": "NVMe disk", 00:07:55.666 "block_size": 4096, 00:07:55.666 "num_blocks": 38912, 00:07:55.666 "uuid": "f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba", 00:07:55.666 "numa_id": 1, 00:07:55.666 "assigned_rate_limits": { 00:07:55.666 "rw_ios_per_sec": 0, 00:07:55.666 "rw_mbytes_per_sec": 0, 00:07:55.666 "r_mbytes_per_sec": 0, 00:07:55.666 "w_mbytes_per_sec": 0 00:07:55.666 }, 00:07:55.666 "claimed": false, 00:07:55.666 "zoned": false, 00:07:55.666 "supported_io_types": { 00:07:55.666 "read": 
true, 00:07:55.666 "write": true, 00:07:55.666 "unmap": true, 00:07:55.666 "flush": true, 00:07:55.666 "reset": true, 00:07:55.666 "nvme_admin": true, 00:07:55.666 "nvme_io": true, 00:07:55.666 "nvme_io_md": false, 00:07:55.666 "write_zeroes": true, 00:07:55.666 "zcopy": false, 00:07:55.666 "get_zone_info": false, 00:07:55.666 "zone_management": false, 00:07:55.666 "zone_append": false, 00:07:55.666 "compare": true, 00:07:55.666 "compare_and_write": true, 00:07:55.666 "abort": true, 00:07:55.666 "seek_hole": false, 00:07:55.666 "seek_data": false, 00:07:55.666 "copy": true, 00:07:55.666 "nvme_iov_md": false 00:07:55.666 }, 00:07:55.666 "memory_domains": [ 00:07:55.666 { 00:07:55.666 "dma_device_id": "system", 00:07:55.666 "dma_device_type": 1 00:07:55.666 } 00:07:55.666 ], 00:07:55.666 "driver_specific": { 00:07:55.666 "nvme": [ 00:07:55.666 { 00:07:55.666 "trid": { 00:07:55.666 "trtype": "TCP", 00:07:55.666 "adrfam": "IPv4", 00:07:55.666 "traddr": "10.0.0.2", 00:07:55.666 "trsvcid": "4420", 00:07:55.666 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:55.666 }, 00:07:55.666 "ctrlr_data": { 00:07:55.666 "cntlid": 1, 00:07:55.666 "vendor_id": "0x8086", 00:07:55.666 "model_number": "SPDK bdev Controller", 00:07:55.666 "serial_number": "SPDK0", 00:07:55.666 "firmware_revision": "25.01", 00:07:55.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:55.666 "oacs": { 00:07:55.666 "security": 0, 00:07:55.666 "format": 0, 00:07:55.666 "firmware": 0, 00:07:55.666 "ns_manage": 0 00:07:55.666 }, 00:07:55.666 "multi_ctrlr": true, 00:07:55.666 "ana_reporting": false 00:07:55.666 }, 00:07:55.666 "vs": { 00:07:55.666 "nvme_version": "1.3" 00:07:55.666 }, 00:07:55.666 "ns_data": { 00:07:55.666 "id": 1, 00:07:55.666 "can_share": true 00:07:55.666 } 00:07:55.666 } 00:07:55.666 ], 00:07:55.666 "mp_policy": "active_passive" 00:07:55.666 } 00:07:55.666 } 00:07:55.666 ] 00:07:55.666 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2227280 00:07:55.666 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:55.666 22:39:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:55.925 Running I/O for 10 seconds... 00:07:56.862 Latency(us) 00:07:56.862 [2024-12-10T21:39:04.591Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:56.862 Nvme0n1 : 1.00 21558.00 84.21 0.00 0.00 0.00 0.00 0.00 00:07:56.862 [2024-12-10T21:39:04.591Z] =================================================================================================================== 00:07:56.862 [2024-12-10T21:39:04.591Z] Total : 21558.00 84.21 0.00 0.00 0.00 0.00 0.00 00:07:56.862 00:07:57.797 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:07:57.797 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.797 Nvme0n1 : 2.00 21643.00 84.54 0.00 0.00 0.00 0.00 0.00 00:07:57.797 [2024-12-10T21:39:05.526Z] =================================================================================================================== 00:07:57.797 [2024-12-10T21:39:05.526Z] Total : 21643.00 84.54 0.00 0.00 0.00 0.00 0.00 00:07:57.797 00:07:58.056 true 00:07:58.056 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:07:58.056 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 
-- # jq -r '.[0].total_data_clusters' 00:07:58.315 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:58.315 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:58.315 22:39:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2227280 00:07:58.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:58.882 Nvme0n1 : 3.00 21684.67 84.71 0.00 0.00 0.00 0.00 0.00 00:07:58.882 [2024-12-10T21:39:06.612Z] =================================================================================================================== 00:07:58.883 [2024-12-10T21:39:06.612Z] Total : 21684.67 84.71 0.00 0.00 0.00 0.00 0.00 00:07:58.883 00:07:59.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.820 Nvme0n1 : 4.00 21733.50 84.90 0.00 0.00 0.00 0.00 0.00 00:07:59.820 [2024-12-10T21:39:07.549Z] =================================================================================================================== 00:07:59.820 [2024-12-10T21:39:07.549Z] Total : 21733.50 84.90 0.00 0.00 0.00 0.00 0.00 00:07:59.820 00:08:00.756 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.756 Nvme0n1 : 5.00 21788.40 85.11 0.00 0.00 0.00 0.00 0.00 00:08:00.756 [2024-12-10T21:39:08.485Z] =================================================================================================================== 00:08:00.756 [2024-12-10T21:39:08.485Z] Total : 21788.40 85.11 0.00 0.00 0.00 0.00 0.00 00:08:00.756 00:08:02.134 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.134 Nvme0n1 : 6.00 21853.00 85.36 0.00 0.00 0.00 0.00 0.00 00:08:02.134 [2024-12-10T21:39:09.863Z] =================================================================================================================== 
00:08:02.134 [2024-12-10T21:39:09.863Z] Total : 21853.00 85.36 0.00 0.00 0.00 0.00 0.00 00:08:02.134 00:08:03.070 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.070 Nvme0n1 : 7.00 21874.00 85.45 0.00 0.00 0.00 0.00 0.00 00:08:03.070 [2024-12-10T21:39:10.799Z] =================================================================================================================== 00:08:03.070 [2024-12-10T21:39:10.799Z] Total : 21874.00 85.45 0.00 0.00 0.00 0.00 0.00 00:08:03.070 00:08:04.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.007 Nvme0n1 : 8.00 21924.75 85.64 0.00 0.00 0.00 0.00 0.00 00:08:04.007 [2024-12-10T21:39:11.736Z] =================================================================================================================== 00:08:04.007 [2024-12-10T21:39:11.736Z] Total : 21924.75 85.64 0.00 0.00 0.00 0.00 0.00 00:08:04.007 00:08:04.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:04.943 Nvme0n1 : 9.00 21984.67 85.88 0.00 0.00 0.00 0.00 0.00 00:08:04.943 [2024-12-10T21:39:12.672Z] =================================================================================================================== 00:08:04.943 [2024-12-10T21:39:12.672Z] Total : 21984.67 85.88 0.00 0.00 0.00 0.00 0.00 00:08:04.943 00:08:05.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.879 Nvme0n1 : 10.00 22030.20 86.06 0.00 0.00 0.00 0.00 0.00 00:08:05.879 [2024-12-10T21:39:13.608Z] =================================================================================================================== 00:08:05.879 [2024-12-10T21:39:13.608Z] Total : 22030.20 86.06 0.00 0.00 0.00 0.00 0.00 00:08:05.879 00:08:05.879 00:08:05.879 Latency(us) 00:08:05.879 [2024-12-10T21:39:13.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.879 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:05.879 Nvme0n1 : 10.01 22029.06 86.05 0.00 0.00 5806.18 3590.23 9402.99 00:08:05.879 [2024-12-10T21:39:13.608Z] =================================================================================================================== 00:08:05.879 [2024-12-10T21:39:13.608Z] Total : 22029.06 86.05 0.00 0.00 5806.18 3590.23 9402.99 00:08:05.879 { 00:08:05.879 "results": [ 00:08:05.879 { 00:08:05.879 "job": "Nvme0n1", 00:08:05.879 "core_mask": "0x2", 00:08:05.879 "workload": "randwrite", 00:08:05.879 "status": "finished", 00:08:05.879 "queue_depth": 128, 00:08:05.879 "io_size": 4096, 00:08:05.879 "runtime": 10.005603, 00:08:05.879 "iops": 22029.05711929606, 00:08:05.879 "mibps": 86.05100437225023, 00:08:05.879 "io_failed": 0, 00:08:05.879 "io_timeout": 0, 00:08:05.879 "avg_latency_us": 5806.177412387203, 00:08:05.879 "min_latency_us": 3590.2330434782607, 00:08:05.879 "max_latency_us": 9402.991304347826 00:08:05.879 } 00:08:05.879 ], 00:08:05.879 "core_count": 1 00:08:05.879 } 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2227103 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2227103 ']' 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2227103 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2227103 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.879 22:39:13 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2227103' 00:08:05.879 killing process with pid 2227103 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2227103 00:08:05.879 Received shutdown signal, test time was about 10.000000 seconds 00:08:05.879 00:08:05.879 Latency(us) 00:08:05.879 [2024-12-10T21:39:13.608Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.879 [2024-12-10T21:39:13.608Z] =================================================================================================================== 00:08:05.879 [2024-12-10T21:39:13.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:05.879 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2227103 00:08:06.137 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:06.396 22:39:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:06.655 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:06.655 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:06.655 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:06.655 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:06.655 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:06.914 [2024-12-10 22:39:14.521896] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:08:06.914 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:07.173 request: 00:08:07.173 { 00:08:07.173 "uuid": "d2a8ed35-a2ac-4973-9ab9-1484d5185f72", 00:08:07.173 "method": "bdev_lvol_get_lvstores", 00:08:07.173 "req_id": 1 00:08:07.173 } 00:08:07.173 Got JSON-RPC error response 00:08:07.173 response: 00:08:07.173 { 00:08:07.173 "code": -19, 00:08:07.173 "message": "No such device" 00:08:07.173 } 00:08:07.173 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:07.173 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.173 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:07.173 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.173 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.433 aio_bdev 
00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba 00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba 00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.433 22:39:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:07.433 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba -t 2000 00:08:07.692 [ 00:08:07.692 { 00:08:07.692 "name": "f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba", 00:08:07.692 "aliases": [ 00:08:07.692 "lvs/lvol" 00:08:07.692 ], 00:08:07.692 "product_name": "Logical Volume", 00:08:07.692 "block_size": 4096, 00:08:07.692 "num_blocks": 38912, 00:08:07.692 "uuid": "f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba", 00:08:07.692 "assigned_rate_limits": { 00:08:07.692 "rw_ios_per_sec": 0, 00:08:07.692 "rw_mbytes_per_sec": 0, 00:08:07.692 "r_mbytes_per_sec": 0, 00:08:07.692 "w_mbytes_per_sec": 0 00:08:07.692 }, 00:08:07.692 "claimed": false, 00:08:07.692 "zoned": false, 00:08:07.692 "supported_io_types": { 00:08:07.692 "read": true, 00:08:07.692 "write": true, 00:08:07.692 "unmap": 
true, 00:08:07.692 "flush": false, 00:08:07.692 "reset": true, 00:08:07.692 "nvme_admin": false, 00:08:07.692 "nvme_io": false, 00:08:07.692 "nvme_io_md": false, 00:08:07.692 "write_zeroes": true, 00:08:07.692 "zcopy": false, 00:08:07.692 "get_zone_info": false, 00:08:07.692 "zone_management": false, 00:08:07.692 "zone_append": false, 00:08:07.692 "compare": false, 00:08:07.692 "compare_and_write": false, 00:08:07.692 "abort": false, 00:08:07.692 "seek_hole": true, 00:08:07.692 "seek_data": true, 00:08:07.692 "copy": false, 00:08:07.692 "nvme_iov_md": false 00:08:07.692 }, 00:08:07.692 "driver_specific": { 00:08:07.692 "lvol": { 00:08:07.692 "lvol_store_uuid": "d2a8ed35-a2ac-4973-9ab9-1484d5185f72", 00:08:07.692 "base_bdev": "aio_bdev", 00:08:07.692 "thin_provision": false, 00:08:07.692 "num_allocated_clusters": 38, 00:08:07.692 "snapshot": false, 00:08:07.692 "clone": false, 00:08:07.692 "esnap_clone": false 00:08:07.692 } 00:08:07.692 } 00:08:07.692 } 00:08:07.692 ] 00:08:07.692 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:07.692 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:07.692 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:07.951 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:07.951 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:07.951 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r 
'.[0].total_data_clusters' 00:08:08.210 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:08.210 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete f7cf3f23-b126-4e5b-bfa0-5991e21bf5ba 00:08:08.210 22:39:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d2a8ed35-a2ac-4973-9ab9-1484d5185f72 00:08:08.469 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:08.728 00:08:08.728 real 0m15.907s 00:08:08.728 user 0m15.301s 00:08:08.728 sys 0m1.652s 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:08.728 ************************************ 00:08:08.728 END TEST lvs_grow_clean 00:08:08.728 ************************************ 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set 
+x 00:08:08.728 ************************************ 00:08:08.728 START TEST lvs_grow_dirty 00:08:08.728 ************************************ 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:08.728 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:09.026 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:09.026 22:39:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:09.356 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:09.356 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:09.356 22:39:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:09.356 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:09.356 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:09.356 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 07044c4d-b8f8-4674-95d8-171e18823bdb lvol 150 00:08:09.615 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=2c224a02-b60e-48b8-bd01-39aa932168b8 00:08:09.615 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:09.615 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:09.874 [2024-12-10 22:39:17.406212] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:09.874 [2024-12-10 22:39:17.406261] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:09.874 true 00:08:09.874 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:09.874 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:09.874 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:09.874 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:10.133 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 2c224a02-b60e-48b8-bd01-39aa932168b8 00:08:10.392 22:39:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:10.651 [2024-12-10 22:39:18.160494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:10.651 22:39:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2230178 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2230178 /var/tmp/bdevperf.sock 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2230178 ']' 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:10.651 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:10.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:10.652 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:10.652 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:10.910 [2024-12-10 22:39:18.389332] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:08:10.910 [2024-12-10 22:39:18.389379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230178 ] 00:08:10.910 [2024-12-10 22:39:18.464023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.910 [2024-12-10 22:39:18.506525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:10.910 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:10.910 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:10.910 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:11.170 Nvme0n1 00:08:11.170 22:39:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:11.429 [ 00:08:11.429 { 00:08:11.429 "name": "Nvme0n1", 00:08:11.429 "aliases": [ 00:08:11.429 "2c224a02-b60e-48b8-bd01-39aa932168b8" 00:08:11.429 ], 00:08:11.429 "product_name": "NVMe disk", 00:08:11.429 "block_size": 4096, 00:08:11.429 "num_blocks": 38912, 00:08:11.429 "uuid": "2c224a02-b60e-48b8-bd01-39aa932168b8", 00:08:11.429 "numa_id": 1, 00:08:11.429 "assigned_rate_limits": { 00:08:11.429 "rw_ios_per_sec": 0, 00:08:11.429 "rw_mbytes_per_sec": 0, 00:08:11.429 "r_mbytes_per_sec": 0, 00:08:11.429 "w_mbytes_per_sec": 0 00:08:11.429 }, 00:08:11.429 "claimed": false, 00:08:11.429 "zoned": false, 00:08:11.429 "supported_io_types": { 00:08:11.429 "read": 
true, 00:08:11.429 "write": true, 00:08:11.429 "unmap": true, 00:08:11.429 "flush": true, 00:08:11.429 "reset": true, 00:08:11.429 "nvme_admin": true, 00:08:11.429 "nvme_io": true, 00:08:11.429 "nvme_io_md": false, 00:08:11.429 "write_zeroes": true, 00:08:11.429 "zcopy": false, 00:08:11.429 "get_zone_info": false, 00:08:11.429 "zone_management": false, 00:08:11.429 "zone_append": false, 00:08:11.429 "compare": true, 00:08:11.429 "compare_and_write": true, 00:08:11.429 "abort": true, 00:08:11.429 "seek_hole": false, 00:08:11.429 "seek_data": false, 00:08:11.429 "copy": true, 00:08:11.429 "nvme_iov_md": false 00:08:11.429 }, 00:08:11.429 "memory_domains": [ 00:08:11.429 { 00:08:11.429 "dma_device_id": "system", 00:08:11.429 "dma_device_type": 1 00:08:11.429 } 00:08:11.429 ], 00:08:11.429 "driver_specific": { 00:08:11.429 "nvme": [ 00:08:11.429 { 00:08:11.429 "trid": { 00:08:11.429 "trtype": "TCP", 00:08:11.429 "adrfam": "IPv4", 00:08:11.429 "traddr": "10.0.0.2", 00:08:11.429 "trsvcid": "4420", 00:08:11.429 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:11.429 }, 00:08:11.429 "ctrlr_data": { 00:08:11.429 "cntlid": 1, 00:08:11.429 "vendor_id": "0x8086", 00:08:11.429 "model_number": "SPDK bdev Controller", 00:08:11.429 "serial_number": "SPDK0", 00:08:11.429 "firmware_revision": "25.01", 00:08:11.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:11.429 "oacs": { 00:08:11.429 "security": 0, 00:08:11.429 "format": 0, 00:08:11.429 "firmware": 0, 00:08:11.429 "ns_manage": 0 00:08:11.429 }, 00:08:11.429 "multi_ctrlr": true, 00:08:11.429 "ana_reporting": false 00:08:11.429 }, 00:08:11.429 "vs": { 00:08:11.429 "nvme_version": "1.3" 00:08:11.429 }, 00:08:11.429 "ns_data": { 00:08:11.429 "id": 1, 00:08:11.429 "can_share": true 00:08:11.429 } 00:08:11.429 } 00:08:11.429 ], 00:08:11.429 "mp_policy": "active_passive" 00:08:11.429 } 00:08:11.429 } 00:08:11.429 ] 00:08:11.429 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2230326 00:08:11.429 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:11.429 22:39:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:11.688 Running I/O for 10 seconds... 00:08:12.625 Latency(us) 00:08:12.625 [2024-12-10T21:39:20.354Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:12.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:12.625 Nvme0n1 : 1.00 22584.00 88.22 0.00 0.00 0.00 0.00 0.00 00:08:12.625 [2024-12-10T21:39:20.354Z] =================================================================================================================== 00:08:12.625 [2024-12-10T21:39:20.354Z] Total : 22584.00 88.22 0.00 0.00 0.00 0.00 0.00 00:08:12.625 00:08:13.562 22:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:13.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:13.562 Nvme0n1 : 2.00 22697.00 88.66 0.00 0.00 0.00 0.00 0.00 00:08:13.562 [2024-12-10T21:39:21.291Z] =================================================================================================================== 00:08:13.562 [2024-12-10T21:39:21.291Z] Total : 22697.00 88.66 0.00 0.00 0.00 0.00 0.00 00:08:13.562 00:08:13.562 true 00:08:13.821 22:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:13.821 22:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 
-- # jq -r '.[0].total_data_clusters' 00:08:13.821 22:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:13.821 22:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:13.821 22:39:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2230326 00:08:14.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.758 Nvme0n1 : 3.00 22784.67 89.00 0.00 0.00 0.00 0.00 0.00 00:08:14.758 [2024-12-10T21:39:22.487Z] =================================================================================================================== 00:08:14.758 [2024-12-10T21:39:22.487Z] Total : 22784.67 89.00 0.00 0.00 0.00 0.00 0.00 00:08:14.758 00:08:15.695 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.695 Nvme0n1 : 4.00 22856.00 89.28 0.00 0.00 0.00 0.00 0.00 00:08:15.695 [2024-12-10T21:39:23.424Z] =================================================================================================================== 00:08:15.695 [2024-12-10T21:39:23.424Z] Total : 22856.00 89.28 0.00 0.00 0.00 0.00 0.00 00:08:15.695 00:08:16.632 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.632 Nvme0n1 : 5.00 22937.00 89.60 0.00 0.00 0.00 0.00 0.00 00:08:16.632 [2024-12-10T21:39:24.361Z] =================================================================================================================== 00:08:16.632 [2024-12-10T21:39:24.361Z] Total : 22937.00 89.60 0.00 0.00 0.00 0.00 0.00 00:08:16.632 00:08:17.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.568 Nvme0n1 : 6.00 22994.00 89.82 0.00 0.00 0.00 0.00 0.00 00:08:17.568 [2024-12-10T21:39:25.297Z] =================================================================================================================== 
00:08:17.568 [2024-12-10T21:39:25.297Z] Total : 22994.00 89.82 0.00 0.00 0.00 0.00 0.00 00:08:17.568 00:08:18.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:18.505 Nvme0n1 : 7.00 23049.43 90.04 0.00 0.00 0.00 0.00 0.00 00:08:18.505 [2024-12-10T21:39:26.234Z] =================================================================================================================== 00:08:18.505 [2024-12-10T21:39:26.234Z] Total : 23049.43 90.04 0.00 0.00 0.00 0.00 0.00 00:08:18.505 00:08:19.882 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.882 Nvme0n1 : 8.00 23068.50 90.11 0.00 0.00 0.00 0.00 0.00 00:08:19.882 [2024-12-10T21:39:27.611Z] =================================================================================================================== 00:08:19.882 [2024-12-10T21:39:27.611Z] Total : 23068.50 90.11 0.00 0.00 0.00 0.00 0.00 00:08:19.882 00:08:20.820 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.820 Nvme0n1 : 9.00 23098.33 90.23 0.00 0.00 0.00 0.00 0.00 00:08:20.820 [2024-12-10T21:39:28.549Z] =================================================================================================================== 00:08:20.820 [2024-12-10T21:39:28.549Z] Total : 23098.33 90.23 0.00 0.00 0.00 0.00 0.00 00:08:20.820 00:08:21.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.758 Nvme0n1 : 10.00 23117.00 90.30 0.00 0.00 0.00 0.00 0.00 00:08:21.758 [2024-12-10T21:39:29.487Z] =================================================================================================================== 00:08:21.758 [2024-12-10T21:39:29.487Z] Total : 23117.00 90.30 0.00 0.00 0.00 0.00 0.00 00:08:21.758 00:08:21.758 00:08:21.758 Latency(us) 00:08:21.758 [2024-12-10T21:39:29.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.758 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:08:21.758 Nvme0n1 : 10.00 23113.22 90.29 0.00 0.00 5534.07 3248.31 11055.64 00:08:21.758 [2024-12-10T21:39:29.487Z] =================================================================================================================== 00:08:21.758 [2024-12-10T21:39:29.488Z] Total : 23113.22 90.29 0.00 0.00 5534.07 3248.31 11055.64 00:08:21.759 { 00:08:21.759 "results": [ 00:08:21.759 { 00:08:21.759 "job": "Nvme0n1", 00:08:21.759 "core_mask": "0x2", 00:08:21.759 "workload": "randwrite", 00:08:21.759 "status": "finished", 00:08:21.759 "queue_depth": 128, 00:08:21.759 "io_size": 4096, 00:08:21.759 "runtime": 10.004363, 00:08:21.759 "iops": 23113.21570398835, 00:08:21.759 "mibps": 90.28599884370449, 00:08:21.759 "io_failed": 0, 00:08:21.759 "io_timeout": 0, 00:08:21.759 "avg_latency_us": 5534.068320367241, 00:08:21.759 "min_latency_us": 3248.3060869565215, 00:08:21.759 "max_latency_us": 11055.638260869566 00:08:21.759 } 00:08:21.759 ], 00:08:21.759 "core_count": 1 00:08:21.759 } 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2230178 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2230178 ']' 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2230178 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2230178 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:21.759 22:39:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2230178' 00:08:21.759 killing process with pid 2230178 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2230178 00:08:21.759 Received shutdown signal, test time was about 10.000000 seconds 00:08:21.759 00:08:21.759 Latency(us) 00:08:21.759 [2024-12-10T21:39:29.488Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:21.759 [2024-12-10T21:39:29.488Z] =================================================================================================================== 00:08:21.759 [2024-12-10T21:39:29.488Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2230178 00:08:21.759 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.017 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.276 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:22.276 22:39:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2226475 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2226475 00:08:22.536 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2226475 Killed "${NVMF_APP[@]}" "$@" 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2232178 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2232178 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2232178 ']' 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:22.536 22:39:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:22.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:22.536 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.536 [2024-12-10 22:39:30.155322] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:08:22.536 [2024-12-10 22:39:30.155367] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:22.536 [2024-12-10 22:39:30.219161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.536 [2024-12-10 22:39:30.260069] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:22.536 [2024-12-10 22:39:30.260102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:22.536 [2024-12-10 22:39:30.260110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:22.536 [2024-12-10 22:39:30.260116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:22.536 [2024-12-10 22:39:30.260121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:22.536 [2024-12-10 22:39:30.260650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.795 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.795 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:22.796 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:22.796 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:22.796 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:22.796 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.796 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:23.055 [2024-12-10 22:39:30.572334] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:23.055 [2024-12-10 22:39:30.572427] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:23.055 [2024-12-10 22:39:30.572452] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 2c224a02-b60e-48b8-bd01-39aa932168b8 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=2c224a02-b60e-48b8-bd01-39aa932168b8 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:23.055 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:23.315 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 2c224a02-b60e-48b8-bd01-39aa932168b8 -t 2000 00:08:23.315 [ 00:08:23.315 { 00:08:23.315 "name": "2c224a02-b60e-48b8-bd01-39aa932168b8", 00:08:23.315 "aliases": [ 00:08:23.315 "lvs/lvol" 00:08:23.315 ], 00:08:23.315 "product_name": "Logical Volume", 00:08:23.315 "block_size": 4096, 00:08:23.315 "num_blocks": 38912, 00:08:23.315 "uuid": "2c224a02-b60e-48b8-bd01-39aa932168b8", 00:08:23.315 "assigned_rate_limits": { 00:08:23.315 "rw_ios_per_sec": 0, 00:08:23.315 "rw_mbytes_per_sec": 0, 00:08:23.315 "r_mbytes_per_sec": 0, 00:08:23.315 "w_mbytes_per_sec": 0 00:08:23.315 }, 00:08:23.315 "claimed": false, 00:08:23.315 "zoned": false, 00:08:23.315 "supported_io_types": { 00:08:23.315 "read": true, 00:08:23.315 "write": true, 00:08:23.315 "unmap": true, 00:08:23.315 "flush": false, 00:08:23.315 "reset": true, 00:08:23.315 "nvme_admin": false, 00:08:23.315 "nvme_io": false, 00:08:23.315 "nvme_io_md": false, 00:08:23.315 "write_zeroes": true, 00:08:23.315 "zcopy": false, 00:08:23.315 "get_zone_info": false, 00:08:23.315 
"zone_management": false, 00:08:23.315 "zone_append": false, 00:08:23.315 "compare": false, 00:08:23.315 "compare_and_write": false, 00:08:23.315 "abort": false, 00:08:23.315 "seek_hole": true, 00:08:23.315 "seek_data": true, 00:08:23.315 "copy": false, 00:08:23.315 "nvme_iov_md": false 00:08:23.315 }, 00:08:23.315 "driver_specific": { 00:08:23.315 "lvol": { 00:08:23.315 "lvol_store_uuid": "07044c4d-b8f8-4674-95d8-171e18823bdb", 00:08:23.315 "base_bdev": "aio_bdev", 00:08:23.315 "thin_provision": false, 00:08:23.315 "num_allocated_clusters": 38, 00:08:23.315 "snapshot": false, 00:08:23.315 "clone": false, 00:08:23.315 "esnap_clone": false 00:08:23.315 } 00:08:23.315 } 00:08:23.315 } 00:08:23.315 ] 00:08:23.315 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:23.315 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:23.315 22:39:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:23.574 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:23.574 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:23.574 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:23.834 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:23.834 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:23.834 [2024-12-10 22:39:31.537169] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:24.093 request: 00:08:24.093 { 00:08:24.093 "uuid": "07044c4d-b8f8-4674-95d8-171e18823bdb", 00:08:24.093 "method": "bdev_lvol_get_lvstores", 00:08:24.093 "req_id": 1 00:08:24.093 } 00:08:24.093 Got JSON-RPC error response 00:08:24.093 response: 00:08:24.093 { 00:08:24.093 "code": -19, 00:08:24.093 "message": "No such device" 00:08:24.093 } 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:24.093 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:24.352 aio_bdev 00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 2c224a02-b60e-48b8-bd01-39aa932168b8 00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=2c224a02-b60e-48b8-bd01-39aa932168b8 
00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:24.352 22:39:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:24.611 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 2c224a02-b60e-48b8-bd01-39aa932168b8 -t 2000 00:08:24.611 [ 00:08:24.611 { 00:08:24.611 "name": "2c224a02-b60e-48b8-bd01-39aa932168b8", 00:08:24.611 "aliases": [ 00:08:24.611 "lvs/lvol" 00:08:24.611 ], 00:08:24.611 "product_name": "Logical Volume", 00:08:24.611 "block_size": 4096, 00:08:24.611 "num_blocks": 38912, 00:08:24.611 "uuid": "2c224a02-b60e-48b8-bd01-39aa932168b8", 00:08:24.611 "assigned_rate_limits": { 00:08:24.611 "rw_ios_per_sec": 0, 00:08:24.611 "rw_mbytes_per_sec": 0, 00:08:24.611 "r_mbytes_per_sec": 0, 00:08:24.611 "w_mbytes_per_sec": 0 00:08:24.611 }, 00:08:24.611 "claimed": false, 00:08:24.611 "zoned": false, 00:08:24.611 "supported_io_types": { 00:08:24.611 "read": true, 00:08:24.611 "write": true, 00:08:24.611 "unmap": true, 00:08:24.611 "flush": false, 00:08:24.611 "reset": true, 00:08:24.611 "nvme_admin": false, 00:08:24.611 "nvme_io": false, 00:08:24.611 "nvme_io_md": false, 00:08:24.611 "write_zeroes": true, 00:08:24.611 "zcopy": false, 00:08:24.611 "get_zone_info": false, 00:08:24.611 "zone_management": false, 00:08:24.611 "zone_append": 
false, 00:08:24.611 "compare": false, 00:08:24.611 "compare_and_write": false, 00:08:24.611 "abort": false, 00:08:24.611 "seek_hole": true, 00:08:24.611 "seek_data": true, 00:08:24.611 "copy": false, 00:08:24.611 "nvme_iov_md": false 00:08:24.611 }, 00:08:24.611 "driver_specific": { 00:08:24.611 "lvol": { 00:08:24.611 "lvol_store_uuid": "07044c4d-b8f8-4674-95d8-171e18823bdb", 00:08:24.611 "base_bdev": "aio_bdev", 00:08:24.611 "thin_provision": false, 00:08:24.611 "num_allocated_clusters": 38, 00:08:24.611 "snapshot": false, 00:08:24.611 "clone": false, 00:08:24.611 "esnap_clone": false 00:08:24.611 } 00:08:24.611 } 00:08:24.611 } 00:08:24.611 ] 00:08:24.611 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:24.611 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:24.611 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:24.871 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:24.871 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:24.871 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:25.129 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:25.129 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_delete 2c224a02-b60e-48b8-bd01-39aa932168b8 00:08:25.389 22:39:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 07044c4d-b8f8-4674-95d8-171e18823bdb 00:08:25.389 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:25.648 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:08:25.648 00:08:25.648 real 0m16.918s 00:08:25.648 user 0m44.064s 00:08:25.648 sys 0m3.759s 00:08:25.648 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:25.648 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:25.648 ************************************ 00:08:25.648 END TEST lvs_grow_dirty 00:08:25.648 ************************************ 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ 
-z nvmf_trace.0 ]] 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:25.649 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:25.908 nvmf_trace.0 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:25.908 rmmod nvme_tcp 00:08:25.908 rmmod nvme_fabrics 00:08:25.908 rmmod nvme_keyring 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2232178 ']' 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2232178 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2232178 ']' 00:08:25.908 22:39:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2232178 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2232178 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:25.908 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2232178' 00:08:25.908 killing process with pid 2232178 00:08:25.909 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2232178 00:08:25.909 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2232178 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:26.168 22:39:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.073 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:28.073 00:08:28.073 real 0m42.112s 00:08:28.073 user 1m5.068s 00:08:28.073 sys 0m10.348s 00:08:28.073 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.073 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:28.073 ************************************ 00:08:28.073 END TEST nvmf_lvs_grow 00:08:28.073 ************************************ 00:08:28.332 22:39:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.332 22:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:28.332 22:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.333 22:39:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:28.333 ************************************ 00:08:28.333 START TEST nvmf_bdev_io_wait 00:08:28.333 ************************************ 00:08:28.333 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:28.333 * Looking for test storage... 
00:08:28.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:28.333 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:28.333 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:28.333 22:39:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- 
# : 1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:28.333 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.333 --rc genhtml_branch_coverage=1 00:08:28.333 --rc genhtml_function_coverage=1 00:08:28.333 --rc genhtml_legend=1 00:08:28.333 --rc geninfo_all_blocks=1 00:08:28.333 --rc geninfo_unexecuted_blocks=1 00:08:28.333 00:08:28.333 ' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.333 --rc genhtml_branch_coverage=1 00:08:28.333 --rc genhtml_function_coverage=1 00:08:28.333 --rc genhtml_legend=1 00:08:28.333 --rc geninfo_all_blocks=1 00:08:28.333 --rc geninfo_unexecuted_blocks=1 00:08:28.333 00:08:28.333 ' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.333 --rc genhtml_branch_coverage=1 00:08:28.333 --rc genhtml_function_coverage=1 00:08:28.333 --rc genhtml_legend=1 00:08:28.333 --rc geninfo_all_blocks=1 00:08:28.333 --rc geninfo_unexecuted_blocks=1 00:08:28.333 00:08:28.333 ' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:28.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.333 --rc genhtml_branch_coverage=1 00:08:28.333 --rc genhtml_function_coverage=1 00:08:28.333 --rc genhtml_legend=1 00:08:28.333 --rc geninfo_all_blocks=1 00:08:28.333 --rc geninfo_unexecuted_blocks=1 00:08:28.333 00:08:28.333 ' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.333 22:39:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:28.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.333 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:28.334 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:28.592 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:28.592 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.592 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:28.592 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:28.592 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:28.592 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:28.593 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:28.593 22:39:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.165 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:35.166 22:39:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:35.166 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:35.166 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.166 22:39:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:35.166 Found net devices under 0000:86:00.0: cvl_0_0 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.166 
22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:35.166 Found net devices under 0000:86:00.1: cvl_0_1 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.166 22:39:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:35.166 22:39:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:35.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:08:35.166 00:08:35.166 --- 10.0.0.2 ping statistics --- 00:08:35.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.166 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:08:35.166 00:08:35.166 --- 10.0.0.1 ping statistics --- 00:08:35.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.166 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:35.166 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2236414 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 2236414 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2236414 ']' 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 [2024-12-10 22:39:42.181022] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:08:35.167 [2024-12-10 22:39:42.181073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.167 [2024-12-10 22:39:42.259767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.167 [2024-12-10 22:39:42.303011] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.167 [2024-12-10 22:39:42.303047] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:35.167 [2024-12-10 22:39:42.303054] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.167 [2024-12-10 22:39:42.303061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.167 [2024-12-10 22:39:42.303066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.167 [2024-12-10 22:39:42.304483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.167 [2024-12-10 22:39:42.304594] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.167 [2024-12-10 22:39:42.304698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.167 [2024-12-10 22:39:42.304699] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 22:39:42 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 [2024-12-10 22:39:42.445260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 Malloc0 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 
22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:35.167 [2024-12-10 22:39:42.488809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2236480 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2236482 
00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.167 { 00:08:35.167 "params": { 00:08:35.167 "name": "Nvme$subsystem", 00:08:35.167 "trtype": "$TEST_TRANSPORT", 00:08:35.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.167 "adrfam": "ipv4", 00:08:35.167 "trsvcid": "$NVMF_PORT", 00:08:35.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.167 "hdgst": ${hdgst:-false}, 00:08:35.167 "ddgst": ${ddgst:-false} 00:08:35.167 }, 00:08:35.167 "method": "bdev_nvme_attach_controller" 00:08:35.167 } 00:08:35.167 EOF 00:08:35.167 )") 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2236484 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.167 { 00:08:35.167 "params": { 
00:08:35.167 "name": "Nvme$subsystem", 00:08:35.167 "trtype": "$TEST_TRANSPORT", 00:08:35.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.167 "adrfam": "ipv4", 00:08:35.167 "trsvcid": "$NVMF_PORT", 00:08:35.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.167 "hdgst": ${hdgst:-false}, 00:08:35.167 "ddgst": ${ddgst:-false} 00:08:35.167 }, 00:08:35.167 "method": "bdev_nvme_attach_controller" 00:08:35.167 } 00:08:35.167 EOF 00:08:35.167 )") 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2236487 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:35.167 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:08:35.167 { 00:08:35.167 "params": { 00:08:35.167 "name": "Nvme$subsystem", 00:08:35.167 "trtype": "$TEST_TRANSPORT", 00:08:35.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.167 "adrfam": "ipv4", 00:08:35.167 "trsvcid": "$NVMF_PORT", 00:08:35.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.167 "hdgst": ${hdgst:-false}, 00:08:35.167 "ddgst": ${ddgst:-false} 00:08:35.167 }, 00:08:35.167 "method": "bdev_nvme_attach_controller" 00:08:35.167 } 00:08:35.167 EOF 00:08:35.168 )") 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:35.168 { 00:08:35.168 "params": { 00:08:35.168 "name": "Nvme$subsystem", 00:08:35.168 "trtype": "$TEST_TRANSPORT", 00:08:35.168 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:35.168 "adrfam": "ipv4", 00:08:35.168 "trsvcid": "$NVMF_PORT", 00:08:35.168 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:35.168 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:35.168 "hdgst": ${hdgst:-false}, 00:08:35.168 "ddgst": ${ddgst:-false} 00:08:35.168 }, 00:08:35.168 "method": "bdev_nvme_attach_controller" 00:08:35.168 } 00:08:35.168 EOF 00:08:35.168 )") 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2236480 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 
00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.168 "params": { 00:08:35.168 "name": "Nvme1", 00:08:35.168 "trtype": "tcp", 00:08:35.168 "traddr": "10.0.0.2", 00:08:35.168 "adrfam": "ipv4", 00:08:35.168 "trsvcid": "4420", 00:08:35.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.168 "hdgst": false, 00:08:35.168 "ddgst": false 00:08:35.168 }, 00:08:35.168 "method": "bdev_nvme_attach_controller" 00:08:35.168 }' 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.168 "params": { 00:08:35.168 "name": "Nvme1", 00:08:35.168 "trtype": "tcp", 00:08:35.168 "traddr": "10.0.0.2", 00:08:35.168 "adrfam": "ipv4", 00:08:35.168 "trsvcid": "4420", 00:08:35.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.168 "hdgst": false, 00:08:35.168 "ddgst": false 00:08:35.168 }, 00:08:35.168 "method": "bdev_nvme_attach_controller" 00:08:35.168 }' 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.168 "params": { 00:08:35.168 "name": "Nvme1", 00:08:35.168 "trtype": "tcp", 00:08:35.168 "traddr": "10.0.0.2", 00:08:35.168 "adrfam": "ipv4", 00:08:35.168 "trsvcid": "4420", 00:08:35.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.168 "hdgst": false, 00:08:35.168 "ddgst": false 00:08:35.168 }, 00:08:35.168 "method": "bdev_nvme_attach_controller" 00:08:35.168 }' 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:35.168 22:39:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:35.168 "params": { 00:08:35.168 "name": "Nvme1", 00:08:35.168 "trtype": "tcp", 00:08:35.168 "traddr": "10.0.0.2", 00:08:35.168 "adrfam": "ipv4", 00:08:35.168 "trsvcid": "4420", 00:08:35.168 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:35.168 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:35.168 "hdgst": false, 00:08:35.168 "ddgst": false 00:08:35.168 }, 00:08:35.168 "method": "bdev_nvme_attach_controller" 00:08:35.168 }' 00:08:35.168 [2024-12-10 22:39:42.538992] Starting SPDK v25.01-pre git sha1 
66289a6db / DPDK 24.03.0 initialization... 00:08:35.168 [2024-12-10 22:39:42.539039] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:35.168 [2024-12-10 22:39:42.539949] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:08:35.168 [2024-12-10 22:39:42.539957] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:08:35.168 [2024-12-10 22:39:42.539989] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:35.168 [2024-12-10 22:39:42.539992] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:35.168 [2024-12-10 22:39:42.545281] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:08:35.168 [2024-12-10 22:39:42.545329] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:35.168 [2024-12-10 22:39:42.724525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.168 [2024-12-10 22:39:42.766169] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:35.168 [2024-12-10 22:39:42.823993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.168 [2024-12-10 22:39:42.865762] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:35.427 [2024-12-10 22:39:42.925341] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.427 [2024-12-10 22:39:42.968811] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.427 [2024-12-10 22:39:42.980349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:35.428 [2024-12-10 22:39:43.010780] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:35.428 Running I/O for 1 seconds... 00:08:35.428 Running I/O for 1 seconds... 00:08:35.687 Running I/O for 1 seconds... 00:08:35.687 Running I/O for 1 seconds... 
00:08:36.623 13485.00 IOPS, 52.68 MiB/s 00:08:36.624 Latency(us) 00:08:36.624 [2024-12-10T21:39:44.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.624 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:36.624 Nvme1n1 : 1.01 13528.96 52.85 0.00 0.00 9430.10 5100.41 16526.47 00:08:36.624 [2024-12-10T21:39:44.353Z] =================================================================================================================== 00:08:36.624 [2024-12-10T21:39:44.353Z] Total : 13528.96 52.85 0.00 0.00 9430.10 5100.41 16526.47 00:08:36.624 6185.00 IOPS, 24.16 MiB/s 00:08:36.624 Latency(us) 00:08:36.624 [2024-12-10T21:39:44.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.624 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:36.624 Nvme1n1 : 1.01 6238.41 24.37 0.00 0.00 20329.22 8206.25 30317.52 00:08:36.624 [2024-12-10T21:39:44.353Z] =================================================================================================================== 00:08:36.624 [2024-12-10T21:39:44.353Z] Total : 6238.41 24.37 0.00 0.00 20329.22 8206.25 30317.52 00:08:36.624 236736.00 IOPS, 924.75 MiB/s 00:08:36.624 Latency(us) 00:08:36.624 [2024-12-10T21:39:44.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.624 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:36.624 Nvme1n1 : 1.00 236371.71 923.33 0.00 0.00 538.70 227.95 1538.67 00:08:36.624 [2024-12-10T21:39:44.353Z] =================================================================================================================== 00:08:36.624 [2024-12-10T21:39:44.353Z] Total : 236371.71 923.33 0.00 0.00 538.70 227.95 1538.67 00:08:36.624 6611.00 IOPS, 25.82 MiB/s 00:08:36.624 Latency(us) 00:08:36.624 [2024-12-10T21:39:44.353Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.624 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:36.624 Nvme1n1 : 1.01 6712.78 26.22 0.00 0.00 19018.00 3575.99 42170.99 00:08:36.624 [2024-12-10T21:39:44.353Z] =================================================================================================================== 00:08:36.624 [2024-12-10T21:39:44.353Z] Total : 6712.78 26.22 0.00 0.00 19018.00 3575.99 42170.99 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2236482 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2236484 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2236487 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 
00:08:36.624 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:36.624 rmmod nvme_tcp 00:08:36.883 rmmod nvme_fabrics 00:08:36.883 rmmod nvme_keyring 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2236414 ']' 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2236414 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2236414 ']' 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2236414 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2236414 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2236414' 00:08:36.883 killing process with pid 2236414 00:08:36.883 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2236414 00:08:36.884 22:39:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2236414 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:36.884 22:39:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:39.420 00:08:39.420 real 0m10.823s 00:08:39.420 user 0m15.830s 00:08:39.420 sys 0m6.223s 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:39.420 ************************************ 
00:08:39.420 END TEST nvmf_bdev_io_wait 00:08:39.420 ************************************ 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.420 ************************************ 00:08:39.420 START TEST nvmf_queue_depth 00:08:39.420 ************************************ 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:39.420 * Looking for test storage... 00:08:39.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 
-- # IFS=.-: 00:08:39.420 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:39.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.421 --rc genhtml_branch_coverage=1 00:08:39.421 --rc genhtml_function_coverage=1 00:08:39.421 --rc genhtml_legend=1 00:08:39.421 --rc geninfo_all_blocks=1 00:08:39.421 --rc 
geninfo_unexecuted_blocks=1 00:08:39.421 00:08:39.421 ' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:39.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.421 --rc genhtml_branch_coverage=1 00:08:39.421 --rc genhtml_function_coverage=1 00:08:39.421 --rc genhtml_legend=1 00:08:39.421 --rc geninfo_all_blocks=1 00:08:39.421 --rc geninfo_unexecuted_blocks=1 00:08:39.421 00:08:39.421 ' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:39.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.421 --rc genhtml_branch_coverage=1 00:08:39.421 --rc genhtml_function_coverage=1 00:08:39.421 --rc genhtml_legend=1 00:08:39.421 --rc geninfo_all_blocks=1 00:08:39.421 --rc geninfo_unexecuted_blocks=1 00:08:39.421 00:08:39.421 ' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:39.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.421 --rc genhtml_branch_coverage=1 00:08:39.421 --rc genhtml_function_coverage=1 00:08:39.421 --rc genhtml_legend=1 00:08:39.421 --rc geninfo_all_blocks=1 00:08:39.421 --rc geninfo_unexecuted_blocks=1 00:08:39.421 00:08:39.421 ' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.421 22:39:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.421 22:39:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:39.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:39.421 22:39:46 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:39.421 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:08:39.422 22:39:46 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:08:45.995 22:39:52 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:45.995 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:45.995 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:45.995 Found net devices under 0000:86:00.0: cvl_0_0 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:45.995 Found net devices under 0000:86:00.1: cvl_0_1 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:45.995 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:45.996 
22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:45.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:45.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:08:45.996 00:08:45.996 --- 10.0.0.2 ping statistics --- 00:08:45.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.996 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:45.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:45.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:08:45.996 00:08:45.996 --- 10.0.0.1 ping statistics --- 00:08:45.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.996 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2240277 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 
2240277 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2240277 ']' 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.996 22:39:52 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 [2024-12-10 22:39:52.992021] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:08:45.996 [2024-12-10 22:39:52.992070] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.996 [2024-12-10 22:39:53.076220] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.996 [2024-12-10 22:39:53.114588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.996 [2024-12-10 22:39:53.114624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:45.996 [2024-12-10 22:39:53.114632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:45.996 [2024-12-10 22:39:53.114638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:45.996 [2024-12-10 22:39:53.114647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:45.996 [2024-12-10 22:39:53.115174] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 [2024-12-10 22:39:53.260425] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 Malloc0 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 [2024-12-10 22:39:53.310982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:45.996 22:39:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2240441 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2240441 /var/tmp/bdevperf.sock 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2240441 ']' 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:45.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:45.996 [2024-12-10 22:39:53.363150] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:08:45.996 [2024-12-10 22:39:53.363200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2240441 ] 00:08:45.996 [2024-12-10 22:39:53.440515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.996 [2024-12-10 22:39:53.482866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.996 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:45.997 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:45.997 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:45.997 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.997 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:46.255 NVMe0n1 00:08:46.255 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.255 22:39:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:46.255 Running I/O for 10 seconds... 
00:08:48.608 11642.00 IOPS, 45.48 MiB/s [2024-12-10T21:39:56.932Z] 12015.00 IOPS, 46.93 MiB/s [2024-12-10T21:39:58.309Z] 12166.67 IOPS, 47.53 MiB/s [2024-12-10T21:39:59.245Z] 12222.75 IOPS, 47.75 MiB/s [2024-12-10T21:40:00.179Z] 12260.20 IOPS, 47.89 MiB/s [2024-12-10T21:40:01.115Z] 12280.17 IOPS, 47.97 MiB/s [2024-12-10T21:40:02.050Z] 12275.43 IOPS, 47.95 MiB/s [2024-12-10T21:40:02.985Z] 12300.00 IOPS, 48.05 MiB/s [2024-12-10T21:40:03.922Z] 12339.44 IOPS, 48.20 MiB/s [2024-12-10T21:40:04.181Z] 12358.00 IOPS, 48.27 MiB/s 00:08:56.452 Latency(us) 00:08:56.452 [2024-12-10T21:40:04.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.452 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:56.452 Verification LBA range: start 0x0 length 0x4000 00:08:56.452 NVMe0n1 : 10.11 12325.08 48.14 0.00 0.00 82480.92 19375.86 69297.20 00:08:56.452 [2024-12-10T21:40:04.181Z] =================================================================================================================== 00:08:56.452 [2024-12-10T21:40:04.181Z] Total : 12325.08 48.14 0.00 0.00 82480.92 19375.86 69297.20 00:08:56.452 { 00:08:56.452 "results": [ 00:08:56.452 { 00:08:56.452 "job": "NVMe0n1", 00:08:56.452 "core_mask": "0x1", 00:08:56.452 "workload": "verify", 00:08:56.452 "status": "finished", 00:08:56.452 "verify_range": { 00:08:56.452 "start": 0, 00:08:56.452 "length": 16384 00:08:56.452 }, 00:08:56.452 "queue_depth": 1024, 00:08:56.452 "io_size": 4096, 00:08:56.452 "runtime": 10.105814, 00:08:56.452 "iops": 12325.08336290377, 00:08:56.452 "mibps": 48.14485688634285, 00:08:56.452 "io_failed": 0, 00:08:56.452 "io_timeout": 0, 00:08:56.452 "avg_latency_us": 82480.91865182659, 00:08:56.452 "min_latency_us": 19375.86086956522, 00:08:56.452 "max_latency_us": 69297.19652173913 00:08:56.452 } 00:08:56.452 ], 00:08:56.452 "core_count": 1 00:08:56.452 } 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2240441 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2240441 ']' 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2240441 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2240441 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2240441' 00:08:56.452 killing process with pid 2240441 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2240441 00:08:56.452 Received shutdown signal, test time was about 10.000000 seconds 00:08:56.452 00:08:56.452 Latency(us) 00:08:56.452 [2024-12-10T21:40:04.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.452 [2024-12-10T21:40:04.181Z] =================================================================================================================== 00:08:56.452 [2024-12-10T21:40:04.181Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:56.452 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2240441 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:56.711 rmmod nvme_tcp 00:08:56.711 rmmod nvme_fabrics 00:08:56.711 rmmod nvme_keyring 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2240277 ']' 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2240277 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2240277 ']' 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2240277 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2240277 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2240277' 00:08:56.711 killing process with pid 2240277 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2240277 00:08:56.711 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2240277 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:56.971 22:40:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.875 22:40:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:59.134 00:08:59.134 real 0m19.864s 00:08:59.134 user 0m23.368s 00:08:59.134 sys 0m6.045s 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:59.134 ************************************ 00:08:59.134 END TEST nvmf_queue_depth 00:08:59.134 ************************************ 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:59.134 ************************************ 00:08:59.134 START TEST nvmf_target_multipath 00:08:59.134 ************************************ 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:59.134 * Looking for test storage... 
00:08:59.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:08:59.134 22:40:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:08:59.134 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.135 --rc genhtml_branch_coverage=1 00:08:59.135 --rc genhtml_function_coverage=1 00:08:59.135 --rc genhtml_legend=1 00:08:59.135 --rc geninfo_all_blocks=1 00:08:59.135 --rc geninfo_unexecuted_blocks=1 00:08:59.135 00:08:59.135 ' 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.135 --rc genhtml_branch_coverage=1 00:08:59.135 --rc genhtml_function_coverage=1 00:08:59.135 --rc genhtml_legend=1 00:08:59.135 --rc geninfo_all_blocks=1 00:08:59.135 --rc geninfo_unexecuted_blocks=1 00:08:59.135 00:08:59.135 ' 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:59.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.135 --rc genhtml_branch_coverage=1 00:08:59.135 --rc genhtml_function_coverage=1 00:08:59.135 --rc genhtml_legend=1 00:08:59.135 --rc geninfo_all_blocks=1 00:08:59.135 --rc geninfo_unexecuted_blocks=1 00:08:59.135 00:08:59.135 ' 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.135 --rc genhtml_branch_coverage=1 00:08:59.135 --rc genhtml_function_coverage=1 00:08:59.135 --rc genhtml_legend=1 00:08:59.135 --rc geninfo_all_blocks=1 00:08:59.135 --rc geninfo_unexecuted_blocks=1 00:08:59.135 00:08:59.135 ' 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:59.135 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:59.395 22:40:06 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:59.395 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:08:59.395 22:40:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.970 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.970 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.970 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:09:05.971 Found net devices under 0000:86:00.0: cvl_0_0
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:05.971 Found net devices under 0000:86:00.1: cvl_0_1
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:05.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:05.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms
00:09:05.971
00:09:05.971 --- 10.0.0.2 ping statistics ---
00:09:05.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:05.971 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:05.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:05.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms
00:09:05.971
00:09:05.971 --- 10.0.0.1 ping statistics ---
00:09:05.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:05.971 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:09:05.971 only one NIC for nvmf test
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:05.971 rmmod nvme_tcp
00:09:05.971 rmmod nvme_fabrics
00:09:05.971 rmmod nvme_keyring
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:05.971 22:40:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:07.351 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:09:07.352 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:07.352 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:07.352 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:07.352 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:07.352 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:07.352 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:07.610
00:09:07.610 real 0m8.403s
00:09:07.610 user 0m1.855s
00:09:07.610 sys 0m4.552s
00:09:07.610 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:07.610 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:09:07.610 ************************************
00:09:07.610 END TEST nvmf_target_multipath
00:09:07.610 ************************************
00:09:07.610 22:40:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:07.610 22:40:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:07.610 22:40:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:07.611 ************************************
00:09:07.611 START TEST nvmf_zcopy
00:09:07.611 ************************************
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:09:07.611 * Looking for test storage...
00:09:07.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.611 --rc genhtml_branch_coverage=1
00:09:07.611 --rc genhtml_function_coverage=1
00:09:07.611 --rc genhtml_legend=1
00:09:07.611 --rc geninfo_all_blocks=1
00:09:07.611 --rc geninfo_unexecuted_blocks=1
00:09:07.611
00:09:07.611 '
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.611 --rc genhtml_branch_coverage=1
00:09:07.611 --rc genhtml_function_coverage=1
00:09:07.611 --rc genhtml_legend=1
00:09:07.611 --rc geninfo_all_blocks=1
00:09:07.611 --rc geninfo_unexecuted_blocks=1
00:09:07.611
00:09:07.611 '
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.611 --rc genhtml_branch_coverage=1
00:09:07.611 --rc genhtml_function_coverage=1
00:09:07.611 --rc genhtml_legend=1
00:09:07.611 --rc geninfo_all_blocks=1
00:09:07.611 --rc geninfo_unexecuted_blocks=1
00:09:07.611
00:09:07.611 '
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:07.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:07.611 --rc genhtml_branch_coverage=1
00:09:07.611 --rc genhtml_function_coverage=1
00:09:07.611 --rc genhtml_legend=1
00:09:07.611 --rc geninfo_all_blocks=1
00:09:07.611 --rc geninfo_unexecuted_blocks=1
00:09:07.611
00:09:07.611 '
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:09:07.611 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:09:07.871 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:09:07.872 22:40:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:09:14.444 Found 0000:86:00.0 (0x8086 - 0x159b)
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:09:14.444 Found 0000:86:00.1 (0x8086 - 0x159b)
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:09:14.444 Found net devices under 0000:86:00.0: cvl_0_0
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:09:14.444 Found net devices under 0000:86:00.1: cvl_0_1
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:14.444 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:14.445 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:14.445 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.367 ms 00:09:14.445 00:09:14.445 --- 10.0.0.2 ping statistics --- 00:09:14.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.445 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:14.445 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:14.445 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:09:14.445 00:09:14.445 --- 10.0.0.1 ping statistics --- 00:09:14.445 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:14.445 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2249377 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2249377 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2249377 ']' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 [2024-12-10 22:40:21.387801] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:09:14.445 [2024-12-10 22:40:21.387843] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.445 [2024-12-10 22:40:21.464306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.445 [2024-12-10 22:40:21.504515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.445 [2024-12-10 22:40:21.504554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:14.445 [2024-12-10 22:40:21.504562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.445 [2024-12-10 22:40:21.504568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.445 [2024-12-10 22:40:21.504573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.445 [2024-12-10 22:40:21.505126] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 [2024-12-10 22:40:21.649030] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 [2024-12-10 22:40:21.665245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 malloc0 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:14.445 { 00:09:14.445 "params": { 00:09:14.445 "name": "Nvme$subsystem", 00:09:14.445 "trtype": "$TEST_TRANSPORT", 00:09:14.445 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.445 "adrfam": "ipv4", 00:09:14.445 "trsvcid": "$NVMF_PORT", 00:09:14.445 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.445 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.445 "hdgst": ${hdgst:-false}, 00:09:14.445 "ddgst": ${ddgst:-false} 00:09:14.445 }, 00:09:14.445 "method": "bdev_nvme_attach_controller" 00:09:14.445 } 00:09:14.445 EOF 00:09:14.445 )") 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:14.445 22:40:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:14.445 "params": { 00:09:14.445 "name": "Nvme1", 00:09:14.445 "trtype": "tcp", 00:09:14.445 "traddr": "10.0.0.2", 00:09:14.445 "adrfam": "ipv4", 00:09:14.445 "trsvcid": "4420", 00:09:14.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.445 "hdgst": false, 00:09:14.445 "ddgst": false 00:09:14.445 }, 00:09:14.445 "method": "bdev_nvme_attach_controller" 00:09:14.445 }' 00:09:14.445 [2024-12-10 22:40:21.751889] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:09:14.445 [2024-12-10 22:40:21.751931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2249444 ] 00:09:14.445 [2024-12-10 22:40:21.828252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.445 [2024-12-10 22:40:21.868513] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.445 Running I/O for 10 seconds... 
00:09:16.757 8493.00 IOPS, 66.35 MiB/s [2024-12-10T21:40:25.422Z] 8548.50 IOPS, 66.79 MiB/s [2024-12-10T21:40:26.358Z] 8588.00 IOPS, 67.09 MiB/s [2024-12-10T21:40:27.295Z] 8596.25 IOPS, 67.16 MiB/s [2024-12-10T21:40:28.236Z] 8597.80 IOPS, 67.17 MiB/s [2024-12-10T21:40:29.172Z] 8610.17 IOPS, 67.27 MiB/s [2024-12-10T21:40:30.108Z] 8612.86 IOPS, 67.29 MiB/s [2024-12-10T21:40:31.485Z] 8618.75 IOPS, 67.33 MiB/s [2024-12-10T21:40:32.422Z] 8597.22 IOPS, 67.17 MiB/s [2024-12-10T21:40:32.422Z] 8598.30 IOPS, 67.17 MiB/s 00:09:24.693 Latency(us) 00:09:24.693 [2024-12-10T21:40:32.422Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:24.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:24.693 Verification LBA range: start 0x0 length 0x1000 00:09:24.693 Nvme1n1 : 10.01 8602.49 67.21 0.00 0.00 14837.31 2151.29 22225.25 00:09:24.693 [2024-12-10T21:40:32.422Z] =================================================================================================================== 00:09:24.693 [2024-12-10T21:40:32.422Z] Total : 8602.49 67.21 0.00 0.00 14837.31 2151.29 22225.25 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2251154 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:24.693 22:40:32 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:24.693 { 00:09:24.693 "params": { 00:09:24.693 "name": "Nvme$subsystem", 00:09:24.693 "trtype": "$TEST_TRANSPORT", 00:09:24.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.693 "adrfam": "ipv4", 00:09:24.693 "trsvcid": "$NVMF_PORT", 00:09:24.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.693 "hdgst": ${hdgst:-false}, 00:09:24.693 "ddgst": ${ddgst:-false} 00:09:24.693 }, 00:09:24.693 "method": "bdev_nvme_attach_controller" 00:09:24.693 } 00:09:24.693 EOF 00:09:24.693 )") 00:09:24.693 [2024-12-10 22:40:32.274607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.274640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:24.693 22:40:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:24.693 "params": { 00:09:24.693 "name": "Nvme1", 00:09:24.693 "trtype": "tcp", 00:09:24.693 "traddr": "10.0.0.2", 00:09:24.693 "adrfam": "ipv4", 00:09:24.693 "trsvcid": "4420", 00:09:24.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.693 "hdgst": false, 00:09:24.693 "ddgst": false 00:09:24.693 }, 00:09:24.693 "method": "bdev_nvme_attach_controller" 00:09:24.693 }' 00:09:24.693 [2024-12-10 22:40:32.286597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.286611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.298629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.298640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.310660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.310672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.316966] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:09:24.693 [2024-12-10 22:40:32.317010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2251154 ] 00:09:24.693 [2024-12-10 22:40:32.322694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.322705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.334725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.334735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.346759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.346776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.358791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.358802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.370825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.370835] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.382858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.382870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.392264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.693 [2024-12-10 22:40:32.394886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:24.693 [2024-12-10 22:40:32.394900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.406927] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.406945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.693 [2024-12-10 22:40:32.418968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.693 [2024-12-10 22:40:32.418987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.430991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.431003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.433702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.953 [2024-12-10 22:40:32.443022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.443034] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.455062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.455086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.467090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.467107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.479120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.479135] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.491149] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.491168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.503184] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.503199] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.515225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.515236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.527262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.527285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.539297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.539312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.551311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.551326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.563338] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.563349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.575369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.575379] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.587412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.587424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.599453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.599467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.611481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.611496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.623522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.623541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.635546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.635559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 Running I/O for 5 seconds... 
00:09:24.953 [2024-12-10 22:40:32.646782] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.646802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.661278] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.661298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:24.953 [2024-12-10 22:40:32.675249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:24.953 [2024-12-10 22:40:32.675271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.212 [2024-12-10 22:40:32.689759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.212 [2024-12-10 22:40:32.689778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.212 [2024-12-10 22:40:32.701340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.212 [2024-12-10 22:40:32.701360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.212 [2024-12-10 22:40:32.715835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.212 [2024-12-10 22:40:32.715859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.212 [2024-12-10 22:40:32.730045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.212 [2024-12-10 22:40:32.730064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.212 [2024-12-10 22:40:32.743942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:25.212 [2024-12-10 22:40:32.743961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:25.212 [2024-12-10 22:40:32.758070] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:25.212 [2024-12-10 22:40:32.758092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:25.212 [... identical "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeated every ~10-15 ms from 22:40:32.772 through 22:40:33.627; duplicates omitted ...]
00:09:25.990 16483.00 IOPS, 128.77 MiB/s [2024-12-10T21:40:33.719Z]
00:09:25.990 [... identical error pair repeated from 22:40:33.641 through 22:40:34.642; duplicates omitted ...]
00:09:27.028 16599.50 IOPS, 129.68 MiB/s [2024-12-10T21:40:34.757Z]
00:09:27.028 [... identical error pair repeated from 22:40:34.656 through 22:40:35.062; duplicates omitted ...]
00:09:27.553 [2024-12-10 22:40:35.076005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:27.553 [2024-12-10 22:40:35.076026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:27.553 [2024-12-10 22:40:35.090296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.090316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.100973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.100993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.115319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.115340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.129289] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.129310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.143172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.143192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.157227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.157247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.171062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.171086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.185261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.185281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.199338] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.199358] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.553 [2024-12-10 22:40:35.214591] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.553 [2024-12-10 22:40:35.214610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.554 [2024-12-10 22:40:35.228611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.554 [2024-12-10 22:40:35.228632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.554 [2024-12-10 22:40:35.242488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.554 [2024-12-10 22:40:35.242507] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.554 [2024-12-10 22:40:35.256669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.554 [2024-12-10 22:40:35.256688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.554 [2024-12-10 22:40:35.270718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.554 [2024-12-10 22:40:35.270738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.813 [2024-12-10 22:40:35.285031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.813 [2024-12-10 22:40:35.285051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.813 [2024-12-10 22:40:35.295843] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.813 [2024-12-10 22:40:35.295862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.813 [2024-12-10 22:40:35.310466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:27.813 [2024-12-10 22:40:35.310485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.813 [2024-12-10 22:40:35.324575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.324595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.338675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.338695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.352656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.352676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.366959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.366979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.382760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.382779] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.397555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.397575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.412661] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.412681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.426500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 
[2024-12-10 22:40:35.426525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.440550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.440573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.451628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.451647] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.465866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.465885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.479723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.479742] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.494022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.494042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.508031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.508051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.522199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.522219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:27.814 [2024-12-10 22:40:35.536763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:27.814 [2024-12-10 22:40:35.536782] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.551806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.551827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.566274] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.566294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.580270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.580290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.594756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.594776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.609870] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.609890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.624238] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.624258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.637963] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.637982] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 16594.67 IOPS, 129.65 MiB/s [2024-12-10T21:40:35.803Z] [2024-12-10 22:40:35.652228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.652246] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.666585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.666604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.677595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.677614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.692223] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.692242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.706112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.706133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.719869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.719890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.733809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.733829] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.747819] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.747839] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.761877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.761896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:28.074 [2024-12-10 22:40:35.776143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.776169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.074 [2024-12-10 22:40:35.787247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.074 [2024-12-10 22:40:35.787266] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.801837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.801858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.815360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.815382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.830080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.830100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.841134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.841165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.855958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.855978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.870090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.870110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.347 [2024-12-10 22:40:35.884151] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.347 [2024-12-10 22:40:35.884179] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.898509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.898534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.909422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.909441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.923778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.923798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.937658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.937677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.951392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.951422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.960861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.960879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.975202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.975222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:35.989250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:35.989269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:36.003515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:36.003535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:36.013885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:36.013905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:36.028500] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:36.028520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:36.042307] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:36.042326] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.348 [2024-12-10 22:40:36.056417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.348 [2024-12-10 22:40:36.056437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.070558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.070579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.085040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.085061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.100314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 
[2024-12-10 22:40:36.100335] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.114703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.114723] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.125393] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.125412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.140471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.140491] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.155259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.155278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.169805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.169824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.183562] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.183581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.197550] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.197569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.211288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.211313] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.225373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.225394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.239339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.239361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.253239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.253261] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.267142] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.267169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.280714] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.280736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.295311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.295331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.311518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.311538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.325419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.325439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:28.647 [2024-12-10 22:40:36.339139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.339164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.353617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.353637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.647 [2024-12-10 22:40:36.364718] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.647 [2024-12-10 22:40:36.364737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.379333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.379356] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.393139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.393174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.402255] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.402274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.416724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.416743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.430180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.430200] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.444324] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.444345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.458361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.458381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.472427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.472453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.486494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.486514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.500828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.500848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.514810] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.514830] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.529042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.529062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.543402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.543422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [2024-12-10 22:40:36.554721] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:28.906 [2024-12-10 22:40:36.554741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:28.906 [... the same subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc.c:1520:nvmf_rpc_ns_paused error pair repeats roughly every 14 ms from 22:40:36.568949 through 22:40:37.641436 ...] 16579.25 IOPS, 129.53 MiB/s [2024-12-10T21:40:36.895Z] 16597.60 IOPS, 129.67 MiB/s [2024-12-10T21:40:37.674Z] [2024-12-10 22:40:37.650902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.945 [2024-12-10 22:40:37.650923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:29.945 00:09:29.945 Latency(us) 00:09:29.945 [2024-12-10T21:40:37.674Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.945 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50,
depth: 128, IO size: 8192) 00:09:29.945 Nvme1n1 : 5.01 16602.03 129.70 0.00 0.00 7702.91 3419.27 15614.66 00:09:29.945 [2024-12-10T21:40:37.674Z] =================================================================================================================== 00:09:29.945 [2024-12-10T21:40:37.674Z] Total : 16602.03 129.70 0.00 0.00 7702.91 3419.27 15614.66 00:09:29.945 [2024-12-10 22:40:37.661400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:29.945 [2024-12-10 22:40:37.661419] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:30.204 [... the same error pair repeats roughly every 12 ms through 22:40:37.817831 ...] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2251154) - No such process 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait
2251154 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.204 delay0 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.204 22:40:37 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:30.463 [2024-12-10 22:40:38.004312] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:37.028 Initializing NVMe Controllers 00:09:37.028 Attached to NVMe 
over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:37.028 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:37.028 Initialization complete. Launching workers. 00:09:37.028 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4627 00:09:37.028 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 4910, failed to submit 37 00:09:37.028 success 4727, unsuccessful 183, failed 0 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.028 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.028 rmmod nvme_tcp 00:09:37.028 rmmod nvme_fabrics 00:09:37.028 rmmod nvme_keyring 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2249377 ']' 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2249377 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 2249377 ']' 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2249377 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2249377 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2249377' 00:09:37.287 killing process with pid 2249377 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2249377 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2249377 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.287 22:40:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:39.827 00:09:39.827 real 0m31.906s 00:09:39.827 user 0m42.647s 00:09:39.827 sys 0m11.396s 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:39.827 ************************************ 00:09:39.827 END TEST nvmf_zcopy 00:09:39.827 ************************************ 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.827 ************************************ 00:09:39.827 START TEST nvmf_nmic 00:09:39.827 ************************************ 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:39.827 * Looking for test storage... 
00:09:39.827 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.827 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.828 22:40:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.828 --rc genhtml_branch_coverage=1 00:09:39.828 --rc genhtml_function_coverage=1 00:09:39.828 --rc genhtml_legend=1 00:09:39.828 --rc geninfo_all_blocks=1 00:09:39.828 --rc geninfo_unexecuted_blocks=1 
00:09:39.828 00:09:39.828 ' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.828 --rc genhtml_branch_coverage=1 00:09:39.828 --rc genhtml_function_coverage=1 00:09:39.828 --rc genhtml_legend=1 00:09:39.828 --rc geninfo_all_blocks=1 00:09:39.828 --rc geninfo_unexecuted_blocks=1 00:09:39.828 00:09:39.828 ' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.828 --rc genhtml_branch_coverage=1 00:09:39.828 --rc genhtml_function_coverage=1 00:09:39.828 --rc genhtml_legend=1 00:09:39.828 --rc geninfo_all_blocks=1 00:09:39.828 --rc geninfo_unexecuted_blocks=1 00:09:39.828 00:09:39.828 ' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.828 --rc genhtml_branch_coverage=1 00:09:39.828 --rc genhtml_function_coverage=1 00:09:39.828 --rc genhtml_legend=1 00:09:39.828 --rc geninfo_all_blocks=1 00:09:39.828 --rc geninfo_unexecuted_blocks=1 00:09:39.828 00:09:39.828 ' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.828 22:40:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.828 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.829 
22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.829 22:40:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:46.400 22:40:52 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:46.400 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:46.400 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.400 22:40:52 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:46.400 Found net devices under 0000:86:00.0: cvl_0_0 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:46.400 Found net devices under 0000:86:00.1: cvl_0_1 00:09:46.400 
22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:46.400 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:46.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:46.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:09:46.401 00:09:46.401 --- 10.0.0.2 ping statistics --- 00:09:46.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.401 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:46.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:46.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:09:46.401 00:09:46.401 --- 10.0.0.1 ping statistics --- 00:09:46.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:46.401 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2256886 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2256886 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2256886 ']' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:46.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [2024-12-10 22:40:53.381364] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:09:46.401 [2024-12-10 22:40:53.381411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:46.401 [2024-12-10 22:40:53.464538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:46.401 [2024-12-10 22:40:53.506922] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:46.401 [2024-12-10 22:40:53.506961] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:46.401 [2024-12-10 22:40:53.506968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:46.401 [2024-12-10 22:40:53.506974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:46.401 [2024-12-10 22:40:53.506979] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:46.401 [2024-12-10 22:40:53.508555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:46.401 [2024-12-10 22:40:53.508666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:46.401 [2024-12-10 22:40:53.508680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:46.401 [2024-12-10 22:40:53.508683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [2024-12-10 22:40:53.655293] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:46.401 
22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 Malloc0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [2024-12-10 22:40:53.716226] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:09:46.401 test case1: single bdev can't be used in multiple subsystems 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.401 [2024-12-10 22:40:53.744123] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:09:46.401 [2024-12-10 
22:40:53.744144] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:09:46.401 [2024-12-10 22:40:53.744152] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.401 request: 00:09:46.401 { 00:09:46.401 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:46.401 "namespace": { 00:09:46.401 "bdev_name": "Malloc0", 00:09:46.401 "no_auto_visible": false, 00:09:46.401 "hide_metadata": false 00:09:46.401 }, 00:09:46.401 "method": "nvmf_subsystem_add_ns", 00:09:46.401 "req_id": 1 00:09:46.401 } 00:09:46.401 Got JSON-RPC error response 00:09:46.401 response: 00:09:46.401 { 00:09:46.401 "code": -32602, 00:09:46.401 "message": "Invalid parameters" 00:09:46.401 } 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:09:46.401 Adding namespace failed - expected result. 
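Test case 1 above is a negative test: adding `Malloc0` to a second subsystem is *supposed* to fail, because the bdev is already claimed with an exclusive-write lock by `cnode1`. The trace captures the pattern nmic.sh uses to assert this: record the RPC's exit status in `nmic_status`, then treat success as the test failure. A minimal sketch of that pattern (with `false` standing in for the `rpc_cmd nvmf_subsystem_add_ns` call that is expected to fail):

```shell
# Expected-failure assertion pattern from nmic.sh.
nmic_status=0
false || nmic_status=1   # stand-in for the rpc_cmd that must fail

if [ "$nmic_status" -eq 0 ]; then
  # The namespace add unexpectedly succeeded: the shared-bdev guard is broken.
  echo ' Adding namespace passed - failure expected.'
  exit 1
fi
echo ' Adding namespace failed - expected result.'
```

Note the `|| nmic_status=1` form: it captures the failure without tripping `set -e`, which would otherwise abort the script on the intentionally failing command.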
00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:09:46.401 test case2: host connect to nvmf target in multiple paths 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.401 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:46.402 [2024-12-10 22:40:53.756257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:09:46.402 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.402 22:40:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:47.338 22:40:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:09:48.720 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.720 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:09:48.720 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.720 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:48.720 22:40:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:09:50.642 22:40:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:50.642 [global] 00:09:50.642 thread=1 00:09:50.642 invalidate=1 00:09:50.642 rw=write 00:09:50.642 time_based=1 00:09:50.642 runtime=1 00:09:50.642 ioengine=libaio 00:09:50.642 direct=1 00:09:50.642 bs=4096 00:09:50.642 iodepth=1 00:09:50.642 norandommap=0 00:09:50.642 numjobs=1 00:09:50.642 00:09:50.642 verify_dump=1 00:09:50.642 verify_backlog=512 00:09:50.642 verify_state_save=0 00:09:50.642 do_verify=1 00:09:50.642 verify=crc32c-intel 00:09:50.642 [job0] 00:09:50.642 filename=/dev/nvme0n1 00:09:50.642 Could not set queue depth (nvme0n1) 00:09:50.902 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:50.902 fio-3.35 00:09:50.902 Starting 1 thread 00:09:52.273 00:09:52.273 job0: (groupid=0, jobs=1): err= 0: pid=2257754: Tue Dec 10 22:40:59 2024 00:09:52.273 read: IOPS=22, BW=90.0KiB/s (92.2kB/s)(92.0KiB/1022msec) 00:09:52.273 slat (nsec): min=10657, max=23792, avg=21441.39, stdev=2419.71 00:09:52.273 clat (usec): min=40841, max=41239, avg=40986.66, stdev=86.20 00:09:52.273 lat (usec): min=40863, max=41250, 
avg=41008.10, stdev=84.44 00:09:52.273 clat percentiles (usec): 00:09:52.273 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:52.273 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:52.273 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:52.273 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:52.273 | 99.99th=[41157] 00:09:52.273 write: IOPS=500, BW=2004KiB/s (2052kB/s)(2048KiB/1022msec); 0 zone resets 00:09:52.273 slat (usec): min=10, max=106, avg=11.51, stdev= 4.49 00:09:52.273 clat (usec): min=119, max=310, avg=139.10, stdev=18.24 00:09:52.273 lat (usec): min=131, max=416, avg=150.61, stdev=20.77 00:09:52.273 clat percentiles (usec): 00:09:52.273 | 1.00th=[ 124], 5.00th=[ 126], 10.00th=[ 127], 20.00th=[ 129], 00:09:52.273 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 133], 60.00th=[ 135], 00:09:52.273 | 70.00th=[ 137], 80.00th=[ 141], 90.00th=[ 167], 95.00th=[ 180], 00:09:52.273 | 99.00th=[ 190], 99.50th=[ 212], 99.90th=[ 310], 99.95th=[ 310], 00:09:52.273 | 99.99th=[ 310] 00:09:52.273 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:09:52.273 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:52.273 lat (usec) : 250=95.51%, 500=0.19% 00:09:52.273 lat (msec) : 50=4.30% 00:09:52.273 cpu : usr=0.20%, sys=1.08%, ctx=536, majf=0, minf=1 00:09:52.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:52.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:52.273 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:52.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:52.273 00:09:52.273 Run status group 0 (all jobs): 00:09:52.273 READ: bw=90.0KiB/s (92.2kB/s), 90.0KiB/s-90.0KiB/s (92.2kB/s-92.2kB/s), io=92.0KiB (94.2kB), 
run=1022-1022msec 00:09:52.273 WRITE: bw=2004KiB/s (2052kB/s), 2004KiB/s-2004KiB/s (2052kB/s-2052kB/s), io=2048KiB (2097kB), run=1022-1022msec 00:09:52.273 00:09:52.273 Disk stats (read/write): 00:09:52.273 nvme0n1: ios=70/512, merge=0/0, ticks=833/67, in_queue=900, util=90.98% 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:52.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:52.273 rmmod nvme_tcp 00:09:52.273 rmmod nvme_fabrics 00:09:52.273 rmmod nvme_keyring 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2256886 ']' 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2256886 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2256886 ']' 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2256886 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256886 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256886' 00:09:52.273 killing process with pid 2256886 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2256886 00:09:52.273 22:40:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2256886 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:52.532 22:41:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:54.438 00:09:54.438 real 0m14.973s 00:09:54.438 user 0m33.464s 00:09:54.438 sys 0m5.180s 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:09:54.438 ************************************ 00:09:54.438 END TEST nvmf_nmic 00:09:54.438 ************************************ 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh 
--transport=tcp 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.438 22:41:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:54.698 ************************************ 00:09:54.698 START TEST nvmf_fio_target 00:09:54.698 ************************************ 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp 00:09:54.698 * Looking for test storage... 00:09:54.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@337 -- # read -ra ver2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:09:54.698 22:41:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.698 --rc genhtml_branch_coverage=1 00:09:54.698 --rc genhtml_function_coverage=1 00:09:54.698 --rc genhtml_legend=1 00:09:54.698 --rc geninfo_all_blocks=1 00:09:54.698 --rc geninfo_unexecuted_blocks=1 00:09:54.698 00:09:54.698 ' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.698 --rc genhtml_branch_coverage=1 00:09:54.698 --rc genhtml_function_coverage=1 00:09:54.698 --rc genhtml_legend=1 00:09:54.698 --rc geninfo_all_blocks=1 00:09:54.698 --rc geninfo_unexecuted_blocks=1 00:09:54.698 00:09:54.698 ' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.698 --rc genhtml_branch_coverage=1 00:09:54.698 --rc genhtml_function_coverage=1 00:09:54.698 --rc genhtml_legend=1 00:09:54.698 --rc geninfo_all_blocks=1 00:09:54.698 --rc geninfo_unexecuted_blocks=1 00:09:54.698 00:09:54.698 ' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:54.698 --rc 
genhtml_branch_coverage=1 00:09:54.698 --rc genhtml_function_coverage=1 00:09:54.698 --rc genhtml_legend=1 00:09:54.698 --rc geninfo_all_blocks=1 00:09:54.698 --rc geninfo_unexecuted_blocks=1 00:09:54.698 00:09:54.698 ' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.698 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:54.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:54.699 22:41:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:09:54.699 22:41:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:01.270 
22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:01.270 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:01.271 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:01.271 22:41:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:01.271 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:01.271 Found net devices under 0000:86:00.0: cvl_0_0 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:01.271 Found net devices under 0000:86:00.1: cvl_0_1 
00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:01.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:01.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:10:01.271 00:10:01.271 --- 10.0.0.2 ping statistics --- 00:10:01.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.271 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:01.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:10:01.271 00:10:01.271 --- 10.0.0.1 ping statistics --- 00:10:01.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.271 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
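The interface preparation that the harness just verified with those pings (`nvmf_tcp_init` in `nvmf/common.sh`) boils down to moving the target-side port into a network namespace and addressing both ends. A minimal sketch, using the interface names, namespace, and IPs from this log; `run()` echoes instead of executing, since the real commands require root and the E810 ports:

```shell
# Dry-run sketch of the nvmf_tcp_init steps seen above.
# cvl_0_0 / cvl_0_1 / cvl_0_0_ns_spdk / 10.0.0.x are taken from this log.
run() { echo "+ $*"; }

TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"                          # clear stale addresses
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"                                     # isolate the target side
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"            # initiator IP
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"  # target IP
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
```

The log then opens TCP port 4420 through iptables and pings in both directions before declaring the fabric usable.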
00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2261637 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2261637 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2261637 ']' 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.271 [2024-12-10 22:41:08.411517] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:10:01.271 [2024-12-10 22:41:08.411565] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.271 [2024-12-10 22:41:08.491271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.271 [2024-12-10 22:41:08.531538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.271 [2024-12-10 22:41:08.531578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.271 [2024-12-10 22:41:08.531586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.271 [2024-12-10 22:41:08.531592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.271 [2024-12-10 22:41:08.531597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
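Once `nvmf_tgt` is up inside the namespace, the test configures it over `rpc.py` (the transport, seven malloc bdevs, a raid0 and a concat volume, the subsystem, its namespaces, and the listener — all visible in the xtrace below). A hedged sketch of that sequence; `rpc()` echoes rather than calling the real `spdk/scripts/rpc.py`, so it needs no running target:

```shell
# Dry-run sketch of the fio.sh RPC configuration seen in this log.
# The NQN, serial, bdev sizes, and listener address are taken from the trace.
rpc() { echo "+ rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8192B IO unit
for _ in 1 2 3 4 5 6 7; do                             # creates Malloc0..Malloc6
  rpc bdev_malloc_create 64 512                        # 64 MiB, 512B blocks
done
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc2 Malloc3"
rpc bdev_raid_create -n concat0 -r concat -z 64 -b "Malloc4 Malloc5 Malloc6"
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
for bdev in Malloc0 Malloc1 raid0 concat0; do          # four namespaces -> nvme0n1..n4
  rpc nvmf_subsystem_add_ns "$NQN" "$bdev"
done
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

The four namespaces are why the subsequent `nvme connect` yields `/dev/nvme0n1` through `/dev/nvme0n4`, which the fio job files below target one job each.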
00:10:01.271 [2024-12-10 22:41:08.533199] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.271 [2024-12-10 22:41:08.533251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.271 [2024-12-10 22:41:08.533355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.271 [2024-12-10 22:41:08.533356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.271 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:01.272 [2024-12-10 22:41:08.843482] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:01.272 22:41:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.529 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:01.529 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:01.786 22:41:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:01.786 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.044 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:02.044 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.044 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:02.044 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:02.302 22:41:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.560 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:02.560 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:02.818 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:02.818 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:03.077 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:03.077 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat 
-z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:03.335 22:41:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.335 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.335 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.593 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:03.593 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:03.851 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:04.109 [2024-12-10 22:41:11.591439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:04.109 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:04.109 22:41:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:04.368 22:41:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.2 -s 4420 00:10:05.742 22:41:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:05.742 22:41:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.742 22:41:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.742 22:41:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:05.742 22:41:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:05.742 22:41:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:07.645 22:41:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.645 [global] 00:10:07.645 thread=1 00:10:07.645 invalidate=1 00:10:07.645 rw=write 00:10:07.645 time_based=1 00:10:07.645 runtime=1 00:10:07.645 ioengine=libaio 00:10:07.645 direct=1 00:10:07.645 bs=4096 00:10:07.645 iodepth=1 00:10:07.645 norandommap=0 00:10:07.645 numjobs=1 
00:10:07.645 00:10:07.645 verify_dump=1 00:10:07.645 verify_backlog=512 00:10:07.645 verify_state_save=0 00:10:07.645 do_verify=1 00:10:07.645 verify=crc32c-intel 00:10:07.645 [job0] 00:10:07.645 filename=/dev/nvme0n1 00:10:07.645 [job1] 00:10:07.645 filename=/dev/nvme0n2 00:10:07.645 [job2] 00:10:07.645 filename=/dev/nvme0n3 00:10:07.645 [job3] 00:10:07.645 filename=/dev/nvme0n4 00:10:07.645 Could not set queue depth (nvme0n1) 00:10:07.645 Could not set queue depth (nvme0n2) 00:10:07.645 Could not set queue depth (nvme0n3) 00:10:07.645 Could not set queue depth (nvme0n4) 00:10:07.902 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.902 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.902 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.902 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.902 fio-3.35 00:10:07.902 Starting 4 threads 00:10:09.278 00:10:09.278 job0: (groupid=0, jobs=1): err= 0: pid=2263080: Tue Dec 10 22:41:16 2024 00:10:09.278 read: IOPS=2210, BW=8843KiB/s (9055kB/s)(8852KiB/1001msec) 00:10:09.278 slat (nsec): min=7260, max=42812, avg=8350.75, stdev=1609.56 00:10:09.278 clat (usec): min=172, max=500, avg=227.69, stdev=29.33 00:10:09.278 lat (usec): min=179, max=523, avg=236.04, stdev=29.43 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:10:09.278 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 221], 60.00th=[ 227], 00:10:09.278 | 70.00th=[ 237], 80.00th=[ 260], 90.00th=[ 273], 95.00th=[ 281], 00:10:09.278 | 99.00th=[ 293], 99.50th=[ 297], 99.90th=[ 396], 99.95th=[ 396], 00:10:09.278 | 99.99th=[ 502] 00:10:09.278 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:09.278 slat (usec): min=3, max=17141, 
avg=17.39, stdev=338.59 00:10:09.278 clat (usec): min=116, max=671, avg=163.99, stdev=33.32 00:10:09.278 lat (usec): min=120, max=17386, avg=181.37, stdev=341.66 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:10:09.278 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 157], 00:10:09.278 | 70.00th=[ 163], 80.00th=[ 178], 90.00th=[ 215], 95.00th=[ 237], 00:10:09.278 | 99.00th=[ 265], 99.50th=[ 285], 99.90th=[ 367], 99.95th=[ 469], 00:10:09.278 | 99.99th=[ 676] 00:10:09.278 bw ( KiB/s): min= 9493, max= 9493, per=39.28%, avg=9493.00, stdev= 0.00, samples=1 00:10:09.278 iops : min= 2373, max= 2373, avg=2373.00, stdev= 0.00, samples=1 00:10:09.278 lat (usec) : 250=88.75%, 500=11.21%, 750=0.04% 00:10:09.278 cpu : usr=4.50%, sys=6.50%, ctx=4776, majf=0, minf=2 00:10:09.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 issued rwts: total=2213,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.278 job1: (groupid=0, jobs=1): err= 0: pid=2263081: Tue Dec 10 22:41:16 2024 00:10:09.278 read: IOPS=36, BW=146KiB/s (149kB/s)(148KiB/1017msec) 00:10:09.278 slat (nsec): min=8309, max=20961, avg=12143.86, stdev=3964.88 00:10:09.278 clat (usec): min=283, max=41038, avg=24464.55, stdev=20142.57 00:10:09.278 lat (usec): min=302, max=41054, avg=24476.69, stdev=20142.96 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 285], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 420], 00:10:09.278 | 30.00th=[ 490], 40.00th=[ 498], 50.00th=[40633], 60.00th=[40633], 00:10:09.278 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:09.278 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 
00:10:09.278 | 99.99th=[41157] 00:10:09.278 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:10:09.278 slat (nsec): min=9928, max=91590, avg=11907.72, stdev=4123.85 00:10:09.278 clat (usec): min=122, max=615, avg=201.72, stdev=51.20 00:10:09.278 lat (usec): min=133, max=642, avg=213.63, stdev=52.07 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 137], 5.00th=[ 143], 10.00th=[ 147], 20.00th=[ 169], 00:10:09.278 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 206], 00:10:09.278 | 70.00th=[ 217], 80.00th=[ 227], 90.00th=[ 243], 95.00th=[ 265], 00:10:09.278 | 99.00th=[ 343], 99.50th=[ 578], 99.90th=[ 619], 99.95th=[ 619], 00:10:09.278 | 99.99th=[ 619] 00:10:09.278 bw ( KiB/s): min= 4087, max= 4087, per=16.91%, avg=4087.00, stdev= 0.00, samples=1 00:10:09.278 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:09.278 lat (usec) : 250=86.70%, 500=8.56%, 750=0.73% 00:10:09.278 lat (msec) : 50=4.01% 00:10:09.278 cpu : usr=1.08%, sys=0.30%, ctx=550, majf=0, minf=1 00:10:09.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 issued rwts: total=37,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.278 job2: (groupid=0, jobs=1): err= 0: pid=2263082: Tue Dec 10 22:41:16 2024 00:10:09.278 read: IOPS=22, BW=90.5KiB/s (92.6kB/s)(92.0KiB/1017msec) 00:10:09.278 slat (nsec): min=9520, max=24350, avg=22117.78, stdev=3979.61 00:10:09.278 clat (usec): min=323, max=42212, avg=39370.73, stdev=8520.93 00:10:09.278 lat (usec): min=346, max=42222, avg=39392.85, stdev=8520.71 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 326], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:09.278 | 30.00th=[41157], 40.00th=[41157], 
50.00th=[41157], 60.00th=[41157], 00:10:09.278 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:10:09.278 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:09.278 | 99.99th=[42206] 00:10:09.278 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:10:09.278 slat (nsec): min=9316, max=39811, avg=10353.40, stdev=1914.50 00:10:09.278 clat (usec): min=130, max=612, avg=203.62, stdev=48.86 00:10:09.278 lat (usec): min=140, max=622, avg=213.97, stdev=49.01 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 137], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 172], 00:10:09.278 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 208], 00:10:09.278 | 70.00th=[ 221], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 265], 00:10:09.278 | 99.00th=[ 334], 99.50th=[ 537], 99.90th=[ 611], 99.95th=[ 611], 00:10:09.278 | 99.99th=[ 611] 00:10:09.278 bw ( KiB/s): min= 4087, max= 4087, per=16.91%, avg=4087.00, stdev= 0.00, samples=1 00:10:09.278 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:10:09.278 lat (usec) : 250=88.41%, 500=6.73%, 750=0.75% 00:10:09.278 lat (msec) : 50=4.11% 00:10:09.278 cpu : usr=0.20%, sys=0.59%, ctx=535, majf=0, minf=1 00:10:09.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.278 job3: (groupid=0, jobs=1): err= 0: pid=2263083: Tue Dec 10 22:41:16 2024 00:10:09.278 read: IOPS=2222, BW=8891KiB/s (9104kB/s)(8900KiB/1001msec) 00:10:09.278 slat (nsec): min=6979, max=23516, avg=9456.83, stdev=1475.67 00:10:09.278 clat (usec): min=188, max=690, avg=229.35, stdev=22.36 00:10:09.278 lat (usec): min=198, max=700, 
avg=238.81, stdev=22.27 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 217], 00:10:09.278 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 231], 00:10:09.278 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 253], 00:10:09.278 | 99.00th=[ 273], 99.50th=[ 297], 99.90th=[ 519], 99.95th=[ 553], 00:10:09.278 | 99.99th=[ 693] 00:10:09.278 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:09.278 slat (usec): min=9, max=14470, avg=18.31, stdev=286.39 00:10:09.278 clat (usec): min=123, max=324, avg=159.72, stdev=23.17 00:10:09.278 lat (usec): min=136, max=14793, avg=178.03, stdev=290.63 00:10:09.278 clat percentiles (usec): 00:10:09.278 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:09.278 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:10:09.278 | 70.00th=[ 161], 80.00th=[ 169], 90.00th=[ 186], 95.00th=[ 208], 00:10:09.278 | 99.00th=[ 243], 99.50th=[ 247], 99.90th=[ 281], 99.95th=[ 322], 00:10:09.278 | 99.99th=[ 326] 00:10:09.278 bw ( KiB/s): min=10355, max=10355, per=42.85%, avg=10355.00, stdev= 0.00, samples=1 00:10:09.278 iops : min= 2588, max= 2588, avg=2588.00, stdev= 0.00, samples=1 00:10:09.278 lat (usec) : 250=96.45%, 500=3.49%, 750=0.06% 00:10:09.278 cpu : usr=2.40%, sys=5.80%, ctx=4789, majf=0, minf=1 00:10:09.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.278 issued rwts: total=2225,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.278 00:10:09.278 Run status group 0 (all jobs): 00:10:09.278 READ: bw=17.3MiB/s (18.1MB/s), 90.5KiB/s-8891KiB/s (92.6kB/s-9104kB/s), io=17.6MiB (18.4MB), run=1001-1017msec 00:10:09.278 WRITE: 
bw=23.6MiB/s (24.7MB/s), 2014KiB/s-9.99MiB/s (2062kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1017msec 00:10:09.278 00:10:09.278 Disk stats (read/write): 00:10:09.278 nvme0n1: ios=1780/2048, merge=0/0, ticks=606/324, in_queue=930, util=86.97% 00:10:09.278 nvme0n2: ios=81/512, merge=0/0, ticks=714/97, in_queue=811, util=85.57% 00:10:09.278 nvme0n3: ios=74/512, merge=0/0, ticks=729/99, in_queue=828, util=89.75% 00:10:09.278 nvme0n4: ios=1803/2048, merge=0/0, ticks=580/320, in_queue=900, util=97.56% 00:10:09.278 22:41:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:09.278 [global] 00:10:09.278 thread=1 00:10:09.278 invalidate=1 00:10:09.278 rw=randwrite 00:10:09.278 time_based=1 00:10:09.278 runtime=1 00:10:09.279 ioengine=libaio 00:10:09.279 direct=1 00:10:09.279 bs=4096 00:10:09.279 iodepth=1 00:10:09.279 norandommap=0 00:10:09.279 numjobs=1 00:10:09.279 00:10:09.279 verify_dump=1 00:10:09.279 verify_backlog=512 00:10:09.279 verify_state_save=0 00:10:09.279 do_verify=1 00:10:09.279 verify=crc32c-intel 00:10:09.279 [job0] 00:10:09.279 filename=/dev/nvme0n1 00:10:09.279 [job1] 00:10:09.279 filename=/dev/nvme0n2 00:10:09.279 [job2] 00:10:09.279 filename=/dev/nvme0n3 00:10:09.279 [job3] 00:10:09.279 filename=/dev/nvme0n4 00:10:09.279 Could not set queue depth (nvme0n1) 00:10:09.279 Could not set queue depth (nvme0n2) 00:10:09.279 Could not set queue depth (nvme0n3) 00:10:09.279 Could not set queue depth (nvme0n4) 00:10:09.537 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.537 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.537 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.537 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:09.537 fio-3.35 00:10:09.537 Starting 4 threads 00:10:10.916 00:10:10.916 job0: (groupid=0, jobs=1): err= 0: pid=2263461: Tue Dec 10 22:41:18 2024 00:10:10.916 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:10:10.916 slat (nsec): min=10599, max=24145, avg=21277.55, stdev=3315.31 00:10:10.916 clat (usec): min=40865, max=41208, avg=40983.51, stdev=79.50 00:10:10.916 lat (usec): min=40886, max=41219, avg=41004.79, stdev=77.68 00:10:10.916 clat percentiles (usec): 00:10:10.916 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:10.916 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.916 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:10.916 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:10.916 | 99.99th=[41157] 00:10:10.916 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:10.916 slat (nsec): min=9659, max=41599, avg=11626.77, stdev=2423.65 00:10:10.916 clat (usec): min=141, max=314, avg=188.39, stdev=15.69 00:10:10.916 lat (usec): min=151, max=356, avg=200.01, stdev=16.34 00:10:10.916 clat percentiles (usec): 00:10:10.916 | 1.00th=[ 149], 5.00th=[ 167], 10.00th=[ 172], 20.00th=[ 176], 00:10:10.916 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:10:10.916 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 206], 95.00th=[ 212], 00:10:10.916 | 99.00th=[ 225], 99.50th=[ 237], 99.90th=[ 314], 99.95th=[ 314], 00:10:10.916 | 99.99th=[ 314] 00:10:10.916 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.916 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.916 lat (usec) : 250=95.69%, 500=0.19% 00:10:10.916 lat (msec) : 50=4.12% 00:10:10.916 cpu : usr=0.20%, sys=1.09%, ctx=534, majf=0, minf=1 00:10:10.916 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:10:10.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.917 job1: (groupid=0, jobs=1): err= 0: pid=2263462: Tue Dec 10 22:41:18 2024 00:10:10.917 read: IOPS=22, BW=88.6KiB/s (90.8kB/s)(92.0KiB/1038msec) 00:10:10.917 slat (nsec): min=9204, max=24627, avg=18615.00, stdev=5319.74 00:10:10.917 clat (usec): min=40810, max=43864, avg=41379.11, stdev=715.84 00:10:10.917 lat (usec): min=40832, max=43887, avg=41397.73, stdev=715.59 00:10:10.917 clat percentiles (usec): 00:10:10.917 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:10.917 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:10.917 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:10.917 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:10.917 | 99.99th=[43779] 00:10:10.917 write: IOPS=493, BW=1973KiB/s (2020kB/s)(2048KiB/1038msec); 0 zone resets 00:10:10.917 slat (nsec): min=9012, max=40176, avg=10102.52, stdev=1631.85 00:10:10.917 clat (usec): min=132, max=315, avg=155.46, stdev=13.13 00:10:10.917 lat (usec): min=141, max=355, avg=165.56, stdev=13.95 00:10:10.917 clat percentiles (usec): 00:10:10.917 | 1.00th=[ 135], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:10.917 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:10:10.917 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 176], 00:10:10.917 | 99.00th=[ 186], 99.50th=[ 194], 99.90th=[ 314], 99.95th=[ 314], 00:10:10.917 | 99.99th=[ 314] 00:10:10.917 bw ( KiB/s): min= 4096, max= 4096, per=23.07%, avg=4096.00, stdev= 0.00, samples=1 00:10:10.917 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:10.917 lat (usec) : 250=95.51%, 500=0.19% 
00:10:10.917 lat (msec) : 50=4.30% 00:10:10.917 cpu : usr=0.29%, sys=0.39%, ctx=535, majf=0, minf=1 00:10:10.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.917 job2: (groupid=0, jobs=1): err= 0: pid=2263463: Tue Dec 10 22:41:18 2024 00:10:10.917 read: IOPS=1896, BW=7584KiB/s (7766kB/s)(7592KiB/1001msec) 00:10:10.917 slat (nsec): min=7404, max=42284, avg=8583.34, stdev=1579.45 00:10:10.917 clat (usec): min=171, max=41224, avg=330.03, stdev=1870.79 00:10:10.917 lat (usec): min=179, max=41233, avg=338.61, stdev=1870.81 00:10:10.917 clat percentiles (usec): 00:10:10.917 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 204], 00:10:10.917 | 30.00th=[ 212], 40.00th=[ 225], 50.00th=[ 237], 60.00th=[ 241], 00:10:10.917 | 70.00th=[ 247], 80.00th=[ 255], 90.00th=[ 367], 95.00th=[ 371], 00:10:10.917 | 99.00th=[ 375], 99.50th=[ 379], 99.90th=[41157], 99.95th=[41157], 00:10:10.917 | 99.99th=[41157] 00:10:10.917 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:10.917 slat (nsec): min=10649, max=42691, avg=11953.91, stdev=1801.47 00:10:10.917 clat (usec): min=119, max=3844, avg=156.72, stdev=84.88 00:10:10.917 lat (usec): min=130, max=3855, avg=168.67, stdev=84.98 00:10:10.917 clat percentiles (usec): 00:10:10.917 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:10:10.917 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 153], 00:10:10.917 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 192], 95.00th=[ 202], 00:10:10.917 | 99.00th=[ 219], 99.50th=[ 227], 99.90th=[ 243], 99.95th=[ 338], 00:10:10.917 | 99.99th=[ 3851] 00:10:10.917 bw ( KiB/s): min= 8192, max= 8192, 
per=46.13%, avg=8192.00, stdev= 0.00, samples=1 00:10:10.917 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:10.917 lat (usec) : 250=87.84%, 500=12.04% 00:10:10.917 lat (msec) : 4=0.03%, 50=0.10% 00:10:10.917 cpu : usr=4.00%, sys=5.70%, ctx=3947, majf=0, minf=1 00:10:10.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 issued rwts: total=1898,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.917 job3: (groupid=0, jobs=1): err= 0: pid=2263464: Tue Dec 10 22:41:18 2024 00:10:10.917 read: IOPS=1046, BW=4187KiB/s (4287kB/s)(4308KiB/1029msec) 00:10:10.917 slat (nsec): min=7217, max=46648, avg=8591.17, stdev=2143.80 00:10:10.917 clat (usec): min=172, max=41120, avg=670.34, stdev=4144.90 00:10:10.917 lat (usec): min=180, max=41129, avg=678.93, stdev=4145.07 00:10:10.917 clat percentiles (usec): 00:10:10.917 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:10.917 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 229], 60.00th=[ 237], 00:10:10.917 | 70.00th=[ 239], 80.00th=[ 245], 90.00th=[ 253], 95.00th=[ 367], 00:10:10.917 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:10.917 | 99.99th=[41157] 00:10:10.917 write: IOPS=1492, BW=5971KiB/s (6114kB/s)(6144KiB/1029msec); 0 zone resets 00:10:10.917 slat (nsec): min=10011, max=53523, avg=11645.71, stdev=3434.51 00:10:10.917 clat (usec): min=112, max=325, avg=177.10, stdev=42.35 00:10:10.917 lat (usec): min=131, max=361, avg=188.75, stdev=42.90 00:10:10.917 clat percentiles (usec): 00:10:10.917 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:10:10.917 | 30.00th=[ 143], 40.00th=[ 155], 50.00th=[ 165], 60.00th=[ 176], 00:10:10.917 | 70.00th=[ 196], 80.00th=[ 233], 
90.00th=[ 243], 95.00th=[ 247], 00:10:10.917 | 99.00th=[ 258], 99.50th=[ 277], 99.90th=[ 302], 99.95th=[ 326], 00:10:10.917 | 99.99th=[ 326] 00:10:10.917 bw ( KiB/s): min= 4096, max= 8192, per=34.60%, avg=6144.00, stdev=2896.31, samples=2 00:10:10.917 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:10:10.917 lat (usec) : 250=93.72%, 500=5.82% 00:10:10.917 lat (msec) : 50=0.46% 00:10:10.917 cpu : usr=2.53%, sys=3.60%, ctx=2613, majf=0, minf=1 00:10:10.917 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:10.917 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:10.917 issued rwts: total=1077,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:10.917 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:10.917 00:10:10.917 Run status group 0 (all jobs): 00:10:10.917 READ: bw=11.4MiB/s (11.9MB/s), 87.5KiB/s-7584KiB/s (89.6kB/s-7766kB/s), io=11.8MiB (12.4MB), run=1001-1038msec 00:10:10.917 WRITE: bw=17.3MiB/s (18.2MB/s), 1973KiB/s-8184KiB/s (2020kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1038msec 00:10:10.917 00:10:10.917 Disk stats (read/write): 00:10:10.917 nvme0n1: ios=68/512, merge=0/0, ticks=776/88, in_queue=864, util=87.27% 00:10:10.917 nvme0n2: ios=31/512, merge=0/0, ticks=922/77, in_queue=999, util=91.07% 00:10:10.917 nvme0n3: ios=1563/1628, merge=0/0, ticks=1526/246, in_queue=1772, util=98.44% 00:10:10.917 nvme0n4: ios=1067/1536, merge=0/0, ticks=522/251, in_queue=773, util=89.73% 00:10:10.917 22:41:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:10.917 [global] 00:10:10.917 thread=1 00:10:10.917 invalidate=1 00:10:10.917 rw=write 00:10:10.917 time_based=1 00:10:10.917 runtime=1 00:10:10.917 ioengine=libaio 00:10:10.917 direct=1 00:10:10.917 bs=4096 
00:10:10.917 iodepth=128 00:10:10.917 norandommap=0 00:10:10.917 numjobs=1 00:10:10.917 00:10:10.917 verify_dump=1 00:10:10.917 verify_backlog=512 00:10:10.917 verify_state_save=0 00:10:10.917 do_verify=1 00:10:10.917 verify=crc32c-intel 00:10:10.917 [job0] 00:10:10.917 filename=/dev/nvme0n1 00:10:10.917 [job1] 00:10:10.917 filename=/dev/nvme0n2 00:10:10.917 [job2] 00:10:10.917 filename=/dev/nvme0n3 00:10:10.917 [job3] 00:10:10.917 filename=/dev/nvme0n4 00:10:10.917 Could not set queue depth (nvme0n1) 00:10:10.917 Could not set queue depth (nvme0n2) 00:10:10.917 Could not set queue depth (nvme0n3) 00:10:10.917 Could not set queue depth (nvme0n4) 00:10:11.179 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.179 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.179 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.179 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:11.179 fio-3.35 00:10:11.179 Starting 4 threads 00:10:12.642 00:10:12.642 job0: (groupid=0, jobs=1): err= 0: pid=2263830: Tue Dec 10 22:41:19 2024 00:10:12.642 read: IOPS=5149, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1005msec) 00:10:12.642 slat (nsec): min=1406, max=31978k, avg=106952.85, stdev=877579.46 00:10:12.642 clat (usec): min=3696, max=98426, avg=13419.26, stdev=12390.08 00:10:12.642 lat (usec): min=4259, max=98437, avg=13526.22, stdev=12471.64 00:10:12.642 clat percentiles (usec): 00:10:12.642 | 1.00th=[ 5604], 5.00th=[ 7504], 10.00th=[ 8717], 20.00th=[ 9896], 00:10:12.642 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:10:12.642 | 70.00th=[10683], 80.00th=[11863], 90.00th=[15533], 95.00th=[35390], 00:10:12.642 | 99.00th=[84411], 99.50th=[94897], 99.90th=[98042], 99.95th=[98042], 00:10:12.642 | 99.99th=[98042] 00:10:12.642 
write: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec); 0 zone resets 00:10:12.642 slat (usec): min=2, max=8195, avg=73.09, stdev=359.61 00:10:12.642 clat (usec): min=1046, max=23571, avg=10299.68, stdev=2054.38 00:10:12.642 lat (usec): min=1056, max=23581, avg=10372.77, stdev=2081.03 00:10:12.642 clat percentiles (usec): 00:10:12.642 | 1.00th=[ 4424], 5.00th=[ 7177], 10.00th=[ 8717], 20.00th=[ 9634], 00:10:12.642 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:10:12.642 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11600], 95.00th=[13566], 00:10:12.642 | 99.00th=[18220], 99.50th=[23200], 99.90th=[23462], 99.95th=[23462], 00:10:12.642 | 99.99th=[23462] 00:10:12.642 bw ( KiB/s): min=19880, max=24600, per=31.28%, avg=22240.00, stdev=3337.54, samples=2 00:10:12.642 iops : min= 4970, max= 6150, avg=5560.00, stdev=834.39, samples=2 00:10:12.642 lat (msec) : 2=0.02%, 4=0.28%, 10=28.31%, 20=67.84%, 50=2.37% 00:10:12.642 lat (msec) : 100=1.18% 00:10:12.642 cpu : usr=3.59%, sys=5.08%, ctx=677, majf=0, minf=1 00:10:12.642 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:10:12.642 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.642 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.642 issued rwts: total=5175,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.642 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.642 job1: (groupid=0, jobs=1): err= 0: pid=2263831: Tue Dec 10 22:41:19 2024 00:10:12.642 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:10:12.642 slat (nsec): min=1123, max=35963k, avg=115421.58, stdev=957257.67 00:10:12.642 clat (usec): min=4433, max=79571, avg=15778.16, stdev=12407.71 00:10:12.642 lat (usec): min=4439, max=79582, avg=15893.58, stdev=12487.27 00:10:12.642 clat percentiles (usec): 00:10:12.642 | 1.00th=[ 6259], 5.00th=[ 9110], 10.00th=[10028], 20.00th=[10683], 00:10:12.642 | 30.00th=[11207], 
40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:10:12.642 | 70.00th=[13304], 80.00th=[15533], 90.00th=[26084], 95.00th=[40633], 00:10:12.642 | 99.00th=[78119], 99.50th=[78119], 99.90th=[79168], 99.95th=[79168], 00:10:12.642 | 99.99th=[79168] 00:10:12.642 write: IOPS=4510, BW=17.6MiB/s (18.5MB/s)(17.7MiB/1005msec); 0 zone resets 00:10:12.642 slat (usec): min=2, max=17465, avg=106.40, stdev=759.68 00:10:12.642 clat (usec): min=1063, max=42831, avg=13880.41, stdev=6163.51 00:10:12.642 lat (usec): min=1073, max=42838, avg=13986.81, stdev=6235.67 00:10:12.642 clat percentiles (usec): 00:10:12.642 | 1.00th=[ 3294], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[ 9765], 00:10:12.642 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10814], 60.00th=[12780], 00:10:12.642 | 70.00th=[14746], 80.00th=[20579], 90.00th=[22938], 95.00th=[25297], 00:10:12.642 | 99.00th=[31065], 99.50th=[31851], 99.90th=[41681], 99.95th=[42206], 00:10:12.643 | 99.99th=[42730] 00:10:12.643 bw ( KiB/s): min=12792, max=22448, per=24.78%, avg=17620.00, stdev=6827.82, samples=2 00:10:12.643 iops : min= 3198, max= 5612, avg=4405.00, stdev=1706.96, samples=2 00:10:12.643 lat (msec) : 2=0.02%, 4=0.73%, 10=19.38%, 20=63.19%, 50=15.20% 00:10:12.643 lat (msec) : 100=1.47% 00:10:12.643 cpu : usr=3.09%, sys=3.78%, ctx=344, majf=0, minf=1 00:10:12.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:12.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.643 issued rwts: total=4096,4533,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.643 job2: (groupid=0, jobs=1): err= 0: pid=2263832: Tue Dec 10 22:41:19 2024 00:10:12.643 read: IOPS=2884, BW=11.3MiB/s (11.8MB/s)(11.3MiB/1003msec) 00:10:12.643 slat (nsec): min=1169, max=19846k, avg=140116.95, stdev=803195.62 00:10:12.643 clat (usec): min=742, max=71179, 
avg=16281.73, stdev=8370.54 00:10:12.643 lat (usec): min=2028, max=71188, avg=16421.85, stdev=8439.33 00:10:12.643 clat percentiles (usec): 00:10:12.643 | 1.00th=[ 3916], 5.00th=[ 8848], 10.00th=[11207], 20.00th=[11600], 00:10:12.643 | 30.00th=[12518], 40.00th=[12780], 50.00th=[14222], 60.00th=[14877], 00:10:12.643 | 70.00th=[17433], 80.00th=[21365], 90.00th=[22676], 95.00th=[23200], 00:10:12.643 | 99.00th=[70779], 99.50th=[70779], 99.90th=[70779], 99.95th=[70779], 00:10:12.643 | 99.99th=[70779] 00:10:12.643 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:10:12.643 slat (nsec): min=1976, max=54510k, avg=188362.44, stdev=1521977.09 00:10:12.643 clat (usec): min=7245, max=85981, avg=22148.17, stdev=17327.25 00:10:12.643 lat (msec): min=7, max=104, avg=22.34, stdev=17.47 00:10:12.643 clat percentiles (usec): 00:10:12.643 | 1.00th=[ 9372], 5.00th=[10945], 10.00th=[11469], 20.00th=[11863], 00:10:12.643 | 30.00th=[11994], 40.00th=[12125], 50.00th=[13960], 60.00th=[16909], 00:10:12.643 | 70.00th=[20841], 80.00th=[28181], 90.00th=[46924], 95.00th=[67634], 00:10:12.643 | 99.00th=[83362], 99.50th=[85459], 99.90th=[85459], 99.95th=[85459], 00:10:12.643 | 99.99th=[85459] 00:10:12.643 bw ( KiB/s): min=12288, max=12288, per=17.28%, avg=12288.00, stdev= 0.00, samples=2 00:10:12.643 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:12.643 lat (usec) : 750=0.02% 00:10:12.643 lat (msec) : 4=0.54%, 10=3.39%, 20=68.33%, 50=22.45%, 100=5.28% 00:10:12.643 cpu : usr=2.00%, sys=3.09%, ctx=414, majf=0, minf=1 00:10:12.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:10:12.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.643 issued rwts: total=2893,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.643 job3: (groupid=0, 
jobs=1): err= 0: pid=2263833: Tue Dec 10 22:41:19 2024 00:10:12.643 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:10:12.643 slat (nsec): min=1357, max=14528k, avg=107413.46, stdev=806483.96 00:10:12.643 clat (usec): min=5018, max=40298, avg=13961.89, stdev=4188.27 00:10:12.643 lat (usec): min=5029, max=47344, avg=14069.30, stdev=4251.95 00:10:12.643 clat percentiles (usec): 00:10:12.643 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11469], 00:10:12.643 | 30.00th=[11731], 40.00th=[12125], 50.00th=[13042], 60.00th=[13829], 00:10:12.643 | 70.00th=[14877], 80.00th=[16450], 90.00th=[18744], 95.00th=[20055], 00:10:12.643 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:10:12.643 | 99.99th=[40109] 00:10:12.643 write: IOPS=4642, BW=18.1MiB/s (19.0MB/s)(18.3MiB/1008msec); 0 zone resets 00:10:12.643 slat (nsec): min=1957, max=12163k, avg=96645.00, stdev=517434.74 00:10:12.643 clat (usec): min=1867, max=58693, avg=13520.25, stdev=8078.79 00:10:12.643 lat (usec): min=1899, max=58708, avg=13616.90, stdev=8134.08 00:10:12.643 clat percentiles (usec): 00:10:12.643 | 1.00th=[ 4621], 5.00th=[ 6521], 10.00th=[ 8094], 20.00th=[10290], 00:10:12.643 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11863], 60.00th=[12125], 00:10:12.643 | 70.00th=[12387], 80.00th=[13435], 90.00th=[19268], 95.00th=[28443], 00:10:12.643 | 99.00th=[52167], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:10:12.643 | 99.99th=[58459] 00:10:12.643 bw ( KiB/s): min=18424, max=18440, per=25.92%, avg=18432.00, stdev=11.31, samples=2 00:10:12.643 iops : min= 4606, max= 4610, avg=4608.00, stdev= 2.83, samples=2 00:10:12.643 lat (msec) : 2=0.08%, 4=0.18%, 10=12.76%, 20=79.83%, 50=6.33% 00:10:12.643 lat (msec) : 100=0.82% 00:10:12.643 cpu : usr=2.98%, sys=5.76%, ctx=530, majf=0, minf=1 00:10:12.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:12.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:10:12.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.643 issued rwts: total=4608,4680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.643 00:10:12.643 Run status group 0 (all jobs): 00:10:12.643 READ: bw=65.0MiB/s (68.2MB/s), 11.3MiB/s-20.1MiB/s (11.8MB/s-21.1MB/s), io=65.5MiB (68.7MB), run=1003-1008msec 00:10:12.643 WRITE: bw=69.4MiB/s (72.8MB/s), 12.0MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=70.0MiB (73.4MB), run=1003-1008msec 00:10:12.643 00:10:12.643 Disk stats (read/write): 00:10:12.643 nvme0n1: ios=4381/4608, merge=0/0, ticks=35947/32170, in_queue=68117, util=86.67% 00:10:12.643 nvme0n2: ios=3991/4096, merge=0/0, ticks=43213/47927, in_queue=91140, util=91.07% 00:10:12.643 nvme0n3: ios=2108/2389, merge=0/0, ticks=15131/21810, in_queue=36941, util=99.17% 00:10:12.643 nvme0n4: ios=3986/4096, merge=0/0, ticks=41842/46682, in_queue=88524, util=96.54% 00:10:12.643 22:41:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:12.643 [global] 00:10:12.643 thread=1 00:10:12.643 invalidate=1 00:10:12.643 rw=randwrite 00:10:12.643 time_based=1 00:10:12.643 runtime=1 00:10:12.643 ioengine=libaio 00:10:12.643 direct=1 00:10:12.643 bs=4096 00:10:12.643 iodepth=128 00:10:12.643 norandommap=0 00:10:12.643 numjobs=1 00:10:12.643 00:10:12.643 verify_dump=1 00:10:12.643 verify_backlog=512 00:10:12.643 verify_state_save=0 00:10:12.643 do_verify=1 00:10:12.643 verify=crc32c-intel 00:10:12.643 [job0] 00:10:12.643 filename=/dev/nvme0n1 00:10:12.643 [job1] 00:10:12.643 filename=/dev/nvme0n2 00:10:12.643 [job2] 00:10:12.643 filename=/dev/nvme0n3 00:10:12.643 [job3] 00:10:12.643 filename=/dev/nvme0n4 00:10:12.643 Could not set queue depth (nvme0n1) 00:10:12.643 Could not set queue depth (nvme0n2) 00:10:12.643 Could not set queue depth (nvme0n3) 
00:10:12.643 Could not set queue depth (nvme0n4) 00:10:12.643 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.643 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.643 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.643 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:12.643 fio-3.35 00:10:12.643 Starting 4 threads 00:10:14.067 00:10:14.067 job0: (groupid=0, jobs=1): err= 0: pid=2264210: Tue Dec 10 22:41:21 2024 00:10:14.067 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:10:14.067 slat (nsec): min=1109, max=26787k, avg=97694.37, stdev=803919.27 00:10:14.067 clat (usec): min=1552, max=51160, avg=12829.48, stdev=7416.41 00:10:14.067 lat (usec): min=1891, max=51177, avg=12927.18, stdev=7474.35 00:10:14.067 clat percentiles (usec): 00:10:14.067 | 1.00th=[ 2671], 5.00th=[ 4490], 10.00th=[ 6325], 20.00th=[ 7898], 00:10:14.067 | 30.00th=[ 8586], 40.00th=[ 9634], 50.00th=[10683], 60.00th=[12125], 00:10:14.067 | 70.00th=[13435], 80.00th=[18744], 90.00th=[21365], 95.00th=[25560], 00:10:14.067 | 99.00th=[43254], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:10:14.067 | 99.99th=[51119] 00:10:14.067 write: IOPS=4986, BW=19.5MiB/s (20.4MB/s)(19.7MiB/1009msec); 0 zone resets 00:10:14.067 slat (nsec): min=1884, max=29037k, avg=98684.57, stdev=695248.07 00:10:14.067 clat (usec): min=1454, max=63309, avg=13673.39, stdev=9991.45 00:10:14.067 lat (usec): min=1465, max=63314, avg=13772.08, stdev=10055.99 00:10:14.067 clat percentiles (usec): 00:10:14.067 | 1.00th=[ 2606], 5.00th=[ 3785], 10.00th=[ 5669], 20.00th=[ 7635], 00:10:14.067 | 30.00th=[ 8291], 40.00th=[ 9241], 50.00th=[10552], 60.00th=[11863], 00:10:14.067 | 70.00th=[13042], 80.00th=[18482], 90.00th=[27657], 95.00th=[35390], 
00:10:14.067 | 99.00th=[55313], 99.50th=[58459], 99.90th=[63177], 99.95th=[63177], 00:10:14.067 | 99.99th=[63177] 00:10:14.067 bw ( KiB/s): min=14648, max=24576, per=29.05%, avg=19612.00, stdev=7020.16, samples=2 00:10:14.067 iops : min= 3662, max= 6144, avg=4903.00, stdev=1755.04, samples=2 00:10:14.067 lat (msec) : 2=0.50%, 4=4.67%, 10=38.98%, 20=39.31%, 50=15.80% 00:10:14.067 lat (msec) : 100=0.75% 00:10:14.067 cpu : usr=2.78%, sys=4.66%, ctx=437, majf=0, minf=1 00:10:14.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:14.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.067 issued rwts: total=4608,5031,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.067 job1: (groupid=0, jobs=1): err= 0: pid=2264211: Tue Dec 10 22:41:21 2024 00:10:14.067 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:10:14.067 slat (nsec): min=1022, max=30275k, avg=120133.25, stdev=969742.57 00:10:14.067 clat (usec): min=4094, max=87757, avg=15620.08, stdev=12093.22 00:10:14.067 lat (usec): min=4098, max=87775, avg=15740.21, stdev=12212.12 00:10:14.068 clat percentiles (usec): 00:10:14.068 | 1.00th=[ 4178], 5.00th=[ 7111], 10.00th=[ 8717], 20.00th=[ 9503], 00:10:14.068 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11600], 60.00th=[13173], 00:10:14.068 | 70.00th=[14353], 80.00th=[19006], 90.00th=[25560], 95.00th=[45351], 00:10:14.068 | 99.00th=[71828], 99.50th=[83362], 99.90th=[87557], 99.95th=[87557], 00:10:14.068 | 99.99th=[87557] 00:10:14.068 write: IOPS=4058, BW=15.9MiB/s (16.6MB/s)(15.9MiB/1002msec); 0 zone resets 00:10:14.068 slat (nsec): min=1775, max=12255k, avg=112245.81, stdev=704889.07 00:10:14.068 clat (usec): min=368, max=87757, avg=17519.22, stdev=16442.84 00:10:14.068 lat (usec): min=925, max=87763, avg=17631.46, stdev=16507.65 00:10:14.068 clat 
percentiles (usec): 00:10:14.068 | 1.00th=[ 2409], 5.00th=[ 5342], 10.00th=[ 6390], 20.00th=[ 8356], 00:10:14.068 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10683], 60.00th=[11731], 00:10:14.068 | 70.00th=[14353], 80.00th=[21890], 90.00th=[42730], 95.00th=[61080], 00:10:14.068 | 99.00th=[79168], 99.50th=[82314], 99.90th=[83362], 99.95th=[83362], 00:10:14.068 | 99.99th=[87557] 00:10:14.068 bw ( KiB/s): min=12152, max=12152, per=18.00%, avg=12152.00, stdev= 0.00, samples=1 00:10:14.068 iops : min= 3038, max= 3038, avg=3038.00, stdev= 0.00, samples=1 00:10:14.068 lat (usec) : 500=0.01%, 1000=0.07% 00:10:14.068 lat (msec) : 2=0.25%, 4=0.94%, 10=30.30%, 20=46.57%, 50=16.39% 00:10:14.068 lat (msec) : 100=5.48% 00:10:14.068 cpu : usr=3.20%, sys=3.70%, ctx=375, majf=0, minf=2 00:10:14.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:14.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.068 issued rwts: total=3584,4067,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.068 job2: (groupid=0, jobs=1): err= 0: pid=2264213: Tue Dec 10 22:41:21 2024 00:10:14.068 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:10:14.068 slat (nsec): min=1386, max=12018k, avg=113298.98, stdev=790759.05 00:10:14.068 clat (usec): min=5060, max=33352, avg=13879.85, stdev=4119.66 00:10:14.068 lat (usec): min=5068, max=33390, avg=13993.15, stdev=4174.30 00:10:14.068 clat percentiles (usec): 00:10:14.068 | 1.00th=[ 5932], 5.00th=[ 8979], 10.00th=[10159], 20.00th=[11338], 00:10:14.068 | 30.00th=[11994], 40.00th=[12649], 50.00th=[12911], 60.00th=[13304], 00:10:14.068 | 70.00th=[14222], 80.00th=[16450], 90.00th=[19268], 95.00th=[22676], 00:10:14.068 | 99.00th=[29230], 99.50th=[30278], 99.90th=[31589], 99.95th=[33424], 00:10:14.068 | 99.99th=[33424] 00:10:14.068 write: 
IOPS=3923, BW=15.3MiB/s (16.1MB/s)(15.4MiB/1007msec); 0 zone resets 00:10:14.068 slat (usec): min=2, max=17091, avg=136.89, stdev=815.13 00:10:14.068 clat (usec): min=550, max=85263, avg=19753.10, stdev=15361.30 00:10:14.068 lat (usec): min=557, max=85272, avg=19889.99, stdev=15457.07 00:10:14.068 clat percentiles (usec): 00:10:14.068 | 1.00th=[ 2245], 5.00th=[ 4555], 10.00th=[ 6783], 20.00th=[10028], 00:10:14.068 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11863], 60.00th=[17433], 00:10:14.068 | 70.00th=[21365], 80.00th=[29492], 90.00th=[41681], 95.00th=[52691], 00:10:14.068 | 99.00th=[76022], 99.50th=[80217], 99.90th=[85459], 99.95th=[85459], 00:10:14.068 | 99.99th=[85459] 00:10:14.068 bw ( KiB/s): min=12464, max=18128, per=22.66%, avg=15296.00, stdev=4005.05, samples=2 00:10:14.068 iops : min= 3116, max= 4532, avg=3824.00, stdev=1001.26, samples=2 00:10:14.068 lat (usec) : 750=0.04% 00:10:14.068 lat (msec) : 2=0.17%, 4=1.37%, 10=13.48%, 20=62.72%, 50=19.06% 00:10:14.068 lat (msec) : 100=3.16% 00:10:14.068 cpu : usr=2.39%, sys=4.97%, ctx=377, majf=0, minf=2 00:10:14.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:14.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.068 issued rwts: total=3584,3951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.068 job3: (groupid=0, jobs=1): err= 0: pid=2264214: Tue Dec 10 22:41:21 2024 00:10:14.068 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:10:14.068 slat (nsec): min=1081, max=22375k, avg=114921.93, stdev=935040.99 00:10:14.068 clat (usec): min=7759, max=53772, avg=16843.70, stdev=7450.61 00:10:14.068 lat (usec): min=7765, max=53794, avg=16958.62, stdev=7516.55 00:10:14.068 clat percentiles (usec): 00:10:14.068 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[10814], 20.00th=[11469], 
00:10:14.068 | 30.00th=[11863], 40.00th=[12780], 50.00th=[13435], 60.00th=[16581], 00:10:14.068 | 70.00th=[19268], 80.00th=[21365], 90.00th=[27395], 95.00th=[31327], 00:10:14.068 | 99.00th=[41157], 99.50th=[44303], 99.90th=[44303], 99.95th=[44303], 00:10:14.068 | 99.99th=[53740] 00:10:14.068 write: IOPS=3948, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1008msec); 0 zone resets 00:10:14.068 slat (nsec): min=1955, max=18103k, avg=126054.63, stdev=866547.90 00:10:14.068 clat (usec): min=1058, max=40114, avg=16968.39, stdev=6481.72 00:10:14.068 lat (usec): min=1068, max=40142, avg=17094.44, stdev=6537.90 00:10:14.068 clat percentiles (usec): 00:10:14.068 | 1.00th=[ 7963], 5.00th=[10028], 10.00th=[10945], 20.00th=[11076], 00:10:14.068 | 30.00th=[11863], 40.00th=[12780], 50.00th=[14615], 60.00th=[19006], 00:10:14.068 | 70.00th=[21627], 80.00th=[22676], 90.00th=[26084], 95.00th=[27395], 00:10:14.068 | 99.00th=[38536], 99.50th=[38536], 99.90th=[39584], 99.95th=[40109], 00:10:14.068 | 99.99th=[40109] 00:10:14.068 bw ( KiB/s): min=14432, max=16384, per=22.82%, avg=15408.00, stdev=1380.27, samples=2 00:10:14.068 iops : min= 3608, max= 4096, avg=3852.00, stdev=345.07, samples=2 00:10:14.068 lat (msec) : 2=0.04%, 4=0.01%, 10=6.60%, 20=62.81%, 50=30.53% 00:10:14.068 lat (msec) : 100=0.01% 00:10:14.068 cpu : usr=1.79%, sys=4.57%, ctx=306, majf=0, minf=1 00:10:14.068 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:14.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:14.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:14.068 issued rwts: total=3584,3980,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:14.068 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:14.068 00:10:14.068 Run status group 0 (all jobs): 00:10:14.068 READ: bw=59.5MiB/s (62.4MB/s), 13.9MiB/s-17.8MiB/s (14.6MB/s-18.7MB/s), io=60.0MiB (62.9MB), run=1002-1009msec 00:10:14.068 WRITE: bw=65.9MiB/s (69.1MB/s), 
15.3MiB/s-19.5MiB/s (16.1MB/s-20.4MB/s), io=66.5MiB (69.8MB), run=1002-1009msec 00:10:14.068 00:10:14.068 Disk stats (read/write): 00:10:14.068 nvme0n1: ios=4524/4608, merge=0/0, ticks=41483/40224, in_queue=81707, util=97.49% 00:10:14.068 nvme0n2: ios=3072/3256, merge=0/0, ticks=34965/58875, in_queue=93840, util=86.09% 00:10:14.068 nvme0n3: ios=3072/3191, merge=0/0, ticks=41870/62309, in_queue=104179, util=88.96% 00:10:14.068 nvme0n4: ios=3072/3098, merge=0/0, ticks=38265/34740, in_queue=73005, util=89.61% 00:10:14.068 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:14.068 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2264447 00:10:14.068 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:14.068 22:41:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:14.068 [global] 00:10:14.068 thread=1 00:10:14.068 invalidate=1 00:10:14.068 rw=read 00:10:14.068 time_based=1 00:10:14.068 runtime=10 00:10:14.068 ioengine=libaio 00:10:14.068 direct=1 00:10:14.068 bs=4096 00:10:14.068 iodepth=1 00:10:14.068 norandommap=1 00:10:14.068 numjobs=1 00:10:14.068 00:10:14.068 [job0] 00:10:14.068 filename=/dev/nvme0n1 00:10:14.068 [job1] 00:10:14.068 filename=/dev/nvme0n2 00:10:14.068 [job2] 00:10:14.068 filename=/dev/nvme0n3 00:10:14.068 [job3] 00:10:14.068 filename=/dev/nvme0n4 00:10:14.068 Could not set queue depth (nvme0n1) 00:10:14.068 Could not set queue depth (nvme0n2) 00:10:14.068 Could not set queue depth (nvme0n3) 00:10:14.068 Could not set queue depth (nvme0n4) 00:10:14.327 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.327 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.327 job2: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.327 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:14.327 fio-3.35 00:10:14.327 Starting 4 threads 00:10:16.861 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:17.120 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41484288, buflen=4096 00:10:17.120 fio: pid=2264586, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.120 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:17.379 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=46182400, buflen=4096 00:10:17.379 fio: pid=2264585, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.379 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.379 22:41:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:17.637 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.637 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:17.637 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=45522944, buflen=4096 00:10:17.637 fio: pid=2264583, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.896 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:17.896 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:17.896 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=5017600, buflen=4096 00:10:17.896 fio: pid=2264584, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:17.896 00:10:17.896 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2264583: Tue Dec 10 22:41:25 2024 00:10:17.896 read: IOPS=3516, BW=13.7MiB/s (14.4MB/s)(43.4MiB/3161msec) 00:10:17.896 slat (usec): min=2, max=15690, avg=10.78, stdev=222.97 00:10:17.896 clat (usec): min=163, max=42360, avg=270.59, stdev=1271.99 00:10:17.896 lat (usec): min=169, max=49194, avg=281.36, stdev=1323.38 00:10:17.896 clat percentiles (usec): 00:10:17.896 | 1.00th=[ 186], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 208], 00:10:17.896 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 227], 00:10:17.896 | 70.00th=[ 233], 80.00th=[ 241], 90.00th=[ 265], 95.00th=[ 289], 00:10:17.896 | 99.00th=[ 412], 99.50th=[ 424], 99.90th=[ 506], 99.95th=[41157], 00:10:17.896 | 99.99th=[41157] 00:10:17.896 bw ( KiB/s): min=10032, max=17688, per=36.23%, avg=14406.50, stdev=3031.59, samples=6 00:10:17.896 iops : min= 2508, max= 4422, avg=3601.50, stdev=758.01, samples=6 00:10:17.896 lat (usec) : 250=85.34%, 500=14.54%, 750=0.02% 00:10:17.896 lat (msec) : 50=0.10% 00:10:17.896 cpu : usr=0.82%, sys=3.54%, ctx=11119, majf=0, minf=1 00:10:17.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.896 issued rwts: total=11115,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:10:17.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.896 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2264584: Tue Dec 10 22:41:25 2024 00:10:17.896 read: IOPS=361, BW=1444KiB/s (1478kB/s)(4900KiB/3394msec) 00:10:17.896 slat (usec): min=6, max=8800, avg=27.35, stdev=388.42 00:10:17.896 clat (usec): min=177, max=42336, avg=2724.31, stdev=9709.53 00:10:17.896 lat (usec): min=184, max=42343, avg=2751.67, stdev=9713.82 00:10:17.896 clat percentiles (usec): 00:10:17.896 | 1.00th=[ 192], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 217], 00:10:17.896 | 30.00th=[ 231], 40.00th=[ 258], 50.00th=[ 277], 60.00th=[ 289], 00:10:17.896 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 326], 95.00th=[40633], 00:10:17.896 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:10:17.896 | 99.99th=[42206] 00:10:17.896 bw ( KiB/s): min= 120, max= 5240, per=3.83%, avg=1522.33, stdev=2111.19, samples=6 00:10:17.896 iops : min= 30, max= 1310, avg=380.50, stdev=527.73, samples=6 00:10:17.896 lat (usec) : 250=37.60%, 500=56.28% 00:10:17.897 lat (msec) : 50=6.04% 00:10:17.897 cpu : usr=0.00%, sys=0.47%, ctx=1230, majf=0, minf=2 00:10:17.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.897 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.897 issued rwts: total=1226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.897 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2264585: Tue Dec 10 22:41:25 2024 00:10:17.897 read: IOPS=3847, BW=15.0MiB/s (15.8MB/s)(44.0MiB/2931msec) 00:10:17.897 slat (nsec): min=7013, max=47695, avg=8830.80, stdev=1644.44 00:10:17.897 clat (usec): min=176, max=41200, avg=247.27, stdev=662.04 
00:10:17.897 lat (usec): min=184, max=41209, avg=256.10, stdev=662.05 00:10:17.897 clat percentiles (usec): 00:10:17.897 | 1.00th=[ 196], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 217], 00:10:17.897 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 233], 60.00th=[ 237], 00:10:17.897 | 70.00th=[ 245], 80.00th=[ 253], 90.00th=[ 269], 95.00th=[ 293], 00:10:17.897 | 99.00th=[ 322], 99.50th=[ 334], 99.90th=[ 388], 99.95th=[ 441], 00:10:17.897 | 99.99th=[40633] 00:10:17.897 bw ( KiB/s): min=11784, max=16968, per=38.81%, avg=15433.60, stdev=2075.03, samples=5 00:10:17.897 iops : min= 2946, max= 4242, avg=3858.40, stdev=518.76, samples=5 00:10:17.897 lat (usec) : 250=77.92%, 500=22.05% 00:10:17.897 lat (msec) : 50=0.03% 00:10:17.897 cpu : usr=2.59%, sys=6.01%, ctx=11278, majf=0, minf=2 00:10:17.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.897 issued rwts: total=11276,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.897 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2264586: Tue Dec 10 22:41:25 2024 00:10:17.897 read: IOPS=3719, BW=14.5MiB/s (15.2MB/s)(39.6MiB/2723msec) 00:10:17.897 slat (nsec): min=6456, max=31163, avg=7539.69, stdev=1037.94 00:10:17.897 clat (usec): min=189, max=40565, avg=257.66, stdev=407.82 00:10:17.897 lat (usec): min=197, max=40572, avg=265.20, stdev=407.82 00:10:17.897 clat percentiles (usec): 00:10:17.897 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 225], 20.00th=[ 233], 00:10:17.897 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:10:17.897 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 302], 00:10:17.897 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 506], 99.95th=[ 652], 00:10:17.897 | 
99.99th=[ 4424] 00:10:17.897 bw ( KiB/s): min=13360, max=16152, per=37.72%, avg=15000.00, stdev=1035.17, samples=5 00:10:17.897 iops : min= 3340, max= 4038, avg=3750.00, stdev=258.79, samples=5 00:10:17.897 lat (usec) : 250=53.89%, 500=45.95%, 750=0.11% 00:10:17.897 lat (msec) : 10=0.03%, 50=0.01% 00:10:17.897 cpu : usr=0.88%, sys=3.49%, ctx=10129, majf=0, minf=2 00:10:17.897 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:17.897 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.897 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:17.897 issued rwts: total=10129,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:17.897 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:17.897 00:10:17.897 Run status group 0 (all jobs): 00:10:17.897 READ: bw=38.8MiB/s (40.7MB/s), 1444KiB/s-15.0MiB/s (1478kB/s-15.8MB/s), io=132MiB (138MB), run=2723-3394msec 00:10:17.897 00:10:17.897 Disk stats (read/write): 00:10:17.897 nvme0n1: ios=11113/0, merge=0/0, ticks=2901/0, in_queue=2901, util=94.58% 00:10:17.897 nvme0n2: ios=1224/0, merge=0/0, ticks=3295/0, in_queue=3295, util=95.78% 00:10:17.897 nvme0n3: ios=11097/0, merge=0/0, ticks=3519/0, in_queue=3519, util=99.49% 00:10:17.897 nvme0n4: ios=9748/0, merge=0/0, ticks=2458/0, in_queue=2458, util=96.41% 00:10:18.156 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.156 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:18.156 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.156 22:41:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:18.415 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.415 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:18.674 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:18.674 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2264447 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:18.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:18.933 nvmf hotplug test: fio failed as expected 00:10:18.933 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:19.192 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.193 rmmod nvme_tcp 00:10:19.193 rmmod nvme_fabrics 00:10:19.193 rmmod nvme_keyring 
00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2261637 ']' 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2261637 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2261637 ']' 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2261637 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.193 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2261637 00:10:19.452 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.452 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.452 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2261637' 00:10:19.452 killing process with pid 2261637 00:10:19.452 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2261637 00:10:19.452 22:41:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2261637 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.452 22:41:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:21.990 00:10:21.990 real 0m26.979s 00:10:21.990 user 1m46.817s 00:10:21.990 sys 0m8.844s 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:21.990 ************************************ 00:10:21.990 END TEST nvmf_fio_target 00:10:21.990 ************************************ 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.990 
22:41:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.990 ************************************ 00:10:21.990 START TEST nvmf_bdevio 00:10:21.990 ************************************ 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:21.990 * Looking for test storage... 00:10:21.990 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.990 22:41:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:21.990 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.991 22:41:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.991 --rc genhtml_branch_coverage=1 00:10:21.991 --rc genhtml_function_coverage=1 00:10:21.991 --rc genhtml_legend=1 00:10:21.991 --rc geninfo_all_blocks=1 00:10:21.991 --rc geninfo_unexecuted_blocks=1 00:10:21.991 00:10:21.991 ' 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.991 --rc genhtml_branch_coverage=1 00:10:21.991 --rc genhtml_function_coverage=1 00:10:21.991 --rc genhtml_legend=1 00:10:21.991 --rc geninfo_all_blocks=1 00:10:21.991 --rc geninfo_unexecuted_blocks=1 00:10:21.991 00:10:21.991 ' 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.991 --rc genhtml_branch_coverage=1 00:10:21.991 --rc genhtml_function_coverage=1 00:10:21.991 --rc genhtml_legend=1 00:10:21.991 --rc geninfo_all_blocks=1 00:10:21.991 --rc geninfo_unexecuted_blocks=1 00:10:21.991 00:10:21.991 ' 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:21.991 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.991 --rc genhtml_branch_coverage=1 00:10:21.991 --rc genhtml_function_coverage=1 00:10:21.991 --rc genhtml_legend=1 00:10:21.991 --rc geninfo_all_blocks=1 00:10:21.991 --rc 
geninfo_unexecuted_blocks=1 00:10:21.991 00:10:21.991 ' 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:21.991 22:41:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:21.991 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:21.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:21.992 22:41:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a 
pci_net_devs 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:28.567 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.567 22:41:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:28.567 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:28.567 Found net devices under 0000:86:00.0: cvl_0_0 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:28.567 Found net devices under 0000:86:00.1: cvl_0_1 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- 
# nvmf_tcp_init 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:28.567 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:28.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:28.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:10:28.568 00:10:28.568 --- 10.0.0.2 ping statistics --- 00:10:28.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.568 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:28.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:10:28.568 00:10:28.568 --- 10.0.0.1 ping statistics --- 00:10:28.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.568 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2269054 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2269054 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2269054 ']' 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 [2024-12-10 22:41:35.476444] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:10:28.568 [2024-12-10 22:41:35.476489] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.568 [2024-12-10 22:41:35.553523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.568 [2024-12-10 22:41:35.593326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.568 [2024-12-10 22:41:35.593369] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:28.568 [2024-12-10 22:41:35.593376] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.568 [2024-12-10 22:41:35.593382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.568 [2024-12-10 22:41:35.593387] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:28.568 [2024-12-10 22:41:35.594983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:28.568 [2024-12-10 22:41:35.595092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:28.568 [2024-12-10 22:41:35.595212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.568 [2024-12-10 22:41:35.595213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 [2024-12-10 22:41:35.744337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 Malloc0 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:28.568 [2024-12-10 
22:41:35.816002] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:28.568 { 00:10:28.568 "params": { 00:10:28.568 "name": "Nvme$subsystem", 00:10:28.568 "trtype": "$TEST_TRANSPORT", 00:10:28.568 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:28.568 "adrfam": "ipv4", 00:10:28.568 "trsvcid": "$NVMF_PORT", 00:10:28.568 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:28.568 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:28.568 "hdgst": ${hdgst:-false}, 00:10:28.568 "ddgst": ${ddgst:-false} 00:10:28.568 }, 00:10:28.568 "method": "bdev_nvme_attach_controller" 00:10:28.568 } 00:10:28.568 EOF 00:10:28.568 )") 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
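The gen_nvmf_target_json helper traced above emits one bdev_nvme_attach_controller block per subsystem and feeds it to bdevio via --json /dev/fd/62. A simplified, runnable sketch of that heredoc (illustrative only; the real helper in nvmf/common.sh loops over "${@:-1}", accumulates blocks with IFS=, and pipes the result through jq):

```shell
# Hypothetical stand-alone version of the per-subsystem config block
# assembled by gen_nvmf_target_json in nvmf/common.sh (simplified).
gen_config_block() {
	local subsystem=$1 transport=$2 traddr=$3 port=$4
	cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$transport",
    "traddr": "$traddr",
    "adrfam": "ipv4",
    "trsvcid": "$port",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_config_block 1 tcp 10.0.0.2 4420
```

With the arguments shown, this reproduces the resolved JSON the log prints right after the `jq .` step (Nvme1, nqn.2016-06.io.spdk:cnode1, 10.0.0.2:4420, digests off).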
00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:28.568 22:41:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:28.568 "params": { 00:10:28.568 "name": "Nvme1", 00:10:28.568 "trtype": "tcp", 00:10:28.568 "traddr": "10.0.0.2", 00:10:28.568 "adrfam": "ipv4", 00:10:28.568 "trsvcid": "4420", 00:10:28.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:28.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:28.568 "hdgst": false, 00:10:28.568 "ddgst": false 00:10:28.568 }, 00:10:28.568 "method": "bdev_nvme_attach_controller" 00:10:28.568 }' 00:10:28.568 [2024-12-10 22:41:35.865526] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:10:28.568 [2024-12-10 22:41:35.865570] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2269086 ] 00:10:28.568 [2024-12-10 22:41:35.941443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:28.568 [2024-12-10 22:41:35.984756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.569 [2024-12-10 22:41:35.984864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.569 [2024-12-10 22:41:35.984865] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.569 I/O targets: 00:10:28.569 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:28.569 00:10:28.569 00:10:28.569 CUnit - A unit testing framework for C - Version 2.1-3 00:10:28.569 http://cunit.sourceforge.net/ 00:10:28.569 00:10:28.569 00:10:28.569 Suite: bdevio tests on: Nvme1n1 00:10:28.569 Test: blockdev write read block ...passed 00:10:28.569 Test: blockdev write zeroes read block ...passed 00:10:28.569 Test: blockdev write zeroes read no split ...passed 00:10:28.569 Test: blockdev write zeroes read split 
...passed 00:10:28.827 Test: blockdev write zeroes read split partial ...passed 00:10:28.827 Test: blockdev reset ...[2024-12-10 22:41:36.299260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:28.827 [2024-12-10 22:41:36.299323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbef050 (9): Bad file descriptor 00:10:28.827 [2024-12-10 22:41:36.313598] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:28.827 passed 00:10:28.827 Test: blockdev write read 8 blocks ...passed 00:10:28.827 Test: blockdev write read size > 128k ...passed 00:10:28.827 Test: blockdev write read invalid size ...passed 00:10:28.828 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:28.828 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:28.828 Test: blockdev write read max offset ...passed 00:10:28.828 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:28.828 Test: blockdev writev readv 8 blocks ...passed 00:10:29.087 Test: blockdev writev readv 30 x 1block ...passed 00:10:29.087 Test: blockdev writev readv block ...passed 00:10:29.087 Test: blockdev writev readv size > 128k ...passed 00:10:29.087 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:29.087 Test: blockdev comparev and writev ...[2024-12-10 22:41:36.604984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 
22:41:36.605035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.605836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:29.087 [2024-12-10 22:41:36.605848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:29.087 passed 00:10:29.087 Test: blockdev nvme passthru rw ...passed 00:10:29.087 Test: blockdev nvme passthru vendor specific ...[2024-12-10 22:41:36.687522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.087 [2024-12-10 22:41:36.687541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.687658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.087 [2024-12-10 22:41:36.687669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.687765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.087 [2024-12-10 22:41:36.687776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:29.087 [2024-12-10 22:41:36.687876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:29.087 [2024-12-10 22:41:36.687887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:29.087 passed 00:10:29.087 Test: blockdev nvme admin passthru ...passed 00:10:29.087 Test: blockdev copy ...passed 00:10:29.087 00:10:29.087 Run Summary: Type Total Ran Passed Failed Inactive 00:10:29.087 suites 1 1 n/a 0 0 00:10:29.087 tests 23 23 23 0 0 00:10:29.087 asserts 152 152 152 0 n/a 00:10:29.087 00:10:29.087 Elapsed time = 1.119 seconds 
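The bdevio run summarized above exercises a target that was provisioned by the RPC calls traced earlier (target/bdevio.sh@18 through @22). As a sketch, the sequence amounts to the commands below; `rpc` is a stub that only echoes, so the sketch runs without a live nvmf_tgt (the real script invokes scripts/rpc.py against the running target):

```shell
# Dry-run sketch of the target/bdevio.sh provisioning sequence.
# rpc is a hypothetical stand-in for scripts/rpc.py; it prints instead
# of talking to the SPDK application over /var/tmp/spdk.sock.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192                              # @18: TCP transport, 8192B in-capsule data
rpc bdev_malloc_create 64 512 -b Malloc0                                 # @19: 64 MiB malloc bdev, 512B blocks
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # @20
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0             # @21: expose Malloc0 as a namespace
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @22
```

This matches the "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)" I/O target the CUnit banner reports.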
00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.347 rmmod nvme_tcp 00:10:29.347 rmmod nvme_fabrics 00:10:29.347 rmmod nvme_keyring 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2269054 ']' 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2269054 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 
-- # '[' -z 2269054 ']' 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2269054 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.347 22:41:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2269054 00:10:29.347 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:29.347 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:29.347 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2269054' 00:10:29.347 killing process with pid 2269054 00:10:29.347 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2269054 00:10:29.347 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2269054 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.607 22:41:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:32.146 00:10:32.146 real 0m10.018s 00:10:32.146 user 0m10.116s 00:10:32.146 sys 0m4.998s 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:32.146 ************************************ 00:10:32.146 END TEST nvmf_bdevio 00:10:32.146 ************************************ 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:32.146 00:10:32.146 real 4m36.767s 00:10:32.146 user 10m21.463s 00:10:32.146 sys 1m38.301s 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:32.146 ************************************ 00:10:32.146 END TEST nvmf_target_core 00:10:32.146 ************************************ 00:10:32.146 22:41:39 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:32.146 22:41:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.146 22:41:39 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.146 22:41:39 nvmf_tcp -- common/autotest_common.sh@10 -- # 
set +x 00:10:32.146 ************************************ 00:10:32.146 START TEST nvmf_target_extra 00:10:32.146 ************************************ 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:32.146 * Looking for test storage... 00:10:32.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case 
"$op" in 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.146 --rc genhtml_branch_coverage=1 00:10:32.146 --rc genhtml_function_coverage=1 00:10:32.146 --rc genhtml_legend=1 00:10:32.146 --rc 
geninfo_all_blocks=1 00:10:32.146 --rc geninfo_unexecuted_blocks=1 00:10:32.146 00:10:32.146 ' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.146 --rc genhtml_branch_coverage=1 00:10:32.146 --rc genhtml_function_coverage=1 00:10:32.146 --rc genhtml_legend=1 00:10:32.146 --rc geninfo_all_blocks=1 00:10:32.146 --rc geninfo_unexecuted_blocks=1 00:10:32.146 00:10:32.146 ' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.146 --rc genhtml_branch_coverage=1 00:10:32.146 --rc genhtml_function_coverage=1 00:10:32.146 --rc genhtml_legend=1 00:10:32.146 --rc geninfo_all_blocks=1 00:10:32.146 --rc geninfo_unexecuted_blocks=1 00:10:32.146 00:10:32.146 ' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.146 --rc genhtml_branch_coverage=1 00:10:32.146 --rc genhtml_function_coverage=1 00:10:32.146 --rc genhtml_legend=1 00:10:32.146 --rc geninfo_all_blocks=1 00:10:32.146 --rc geninfo_unexecuted_blocks=1 00:10:32.146 00:10:32.146 ' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.146 22:41:39 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.146 22:41:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.147 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:32.147 ************************************ 00:10:32.147 START TEST nvmf_example 00:10:32.147 ************************************ 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:32.147 * Looking for test storage... 00:10:32.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 
00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # 
ver2[v]=2 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.147 --rc genhtml_branch_coverage=1 00:10:32.147 --rc genhtml_function_coverage=1 00:10:32.147 --rc genhtml_legend=1 00:10:32.147 --rc geninfo_all_blocks=1 00:10:32.147 --rc geninfo_unexecuted_blocks=1 00:10:32.147 00:10:32.147 ' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.147 --rc genhtml_branch_coverage=1 00:10:32.147 --rc genhtml_function_coverage=1 00:10:32.147 --rc genhtml_legend=1 00:10:32.147 --rc geninfo_all_blocks=1 00:10:32.147 --rc geninfo_unexecuted_blocks=1 00:10:32.147 00:10:32.147 ' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.147 --rc genhtml_branch_coverage=1 00:10:32.147 --rc genhtml_function_coverage=1 00:10:32.147 --rc genhtml_legend=1 00:10:32.147 --rc geninfo_all_blocks=1 00:10:32.147 --rc geninfo_unexecuted_blocks=1 00:10:32.147 00:10:32.147 ' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:32.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.147 --rc 
genhtml_branch_coverage=1 00:10:32.147 --rc genhtml_function_coverage=1 00:10:32.147 --rc genhtml_legend=1 00:10:32.147 --rc geninfo_all_blocks=1 00:10:32.147 --rc geninfo_unexecuted_blocks=1 00:10:32.147 00:10:32.147 ' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.147 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:32.148 22:41:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.148 
22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.148 22:41:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:38.719 22:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:38.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:38.719 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:38.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:38.720 Found net devices under 0000:86:00.0: cvl_0_0 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:38.720 22:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:38.720 Found net devices under 0000:86:00.1: cvl_0_1 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:38.720 
22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:38.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:38.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:10:38.720 00:10:38.720 --- 10.0.0.2 ping statistics --- 00:10:38.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.720 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:38.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:10:38.720 00:10:38.720 --- 10.0.0.1 ping statistics --- 00:10:38.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.720 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.720 22:41:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2272902 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2272902 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2272902 ']' 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:10:38.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.720 22:41:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:39.288 
22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:10:39.288 22:41:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 
-o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:51.494 Initializing NVMe Controllers 00:10:51.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:51.494 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:51.494 Initialization complete. Launching workers. 00:10:51.494 ======================================================== 00:10:51.494 Latency(us) 00:10:51.494 Device Information : IOPS MiB/s Average min max 00:10:51.494 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18106.94 70.73 3535.37 708.96 16210.94 00:10:51.494 ======================================================== 00:10:51.494 Total : 18106.94 70.73 3535.37 708.96 16210.94 00:10:51.494 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:51.494 rmmod nvme_tcp 00:10:51.494 rmmod nvme_fabrics 00:10:51.494 rmmod nvme_keyring 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2272902 ']' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2272902 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2272902 ']' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2272902 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2272902 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2272902' 00:10:51.494 killing process with pid 2272902 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2272902 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2272902 00:10:51.494 nvmf threads initialize successfully 00:10:51.494 bdev subsystem init successfully 00:10:51.494 created a nvmf target service 00:10:51.494 create targets's poll groups done 00:10:51.494 all subsystems of target started 00:10:51.494 nvmf target is running 00:10:51.494 all subsystems of target stopped 00:10:51.494 destroy targets's poll groups done 00:10:51.494 destroyed the nvmf target service 00:10:51.494 bdev subsystem 
finish successfully 00:10:51.494 nvmf threads destroy successfully 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:51.494 22:41:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:51.754 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:51.754 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:51.754 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:51.754 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.013 00:10:52.013 real 0m19.879s 00:10:52.013 user 0m46.045s 00:10:52.013 sys 0m6.158s 00:10:52.013 
22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:52.013 ************************************ 00:10:52.013 END TEST nvmf_example 00:10:52.013 ************************************ 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:52.013 ************************************ 00:10:52.013 START TEST nvmf_filesystem 00:10:52.013 ************************************ 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:52.013 * Looking for test storage... 
00:10:52.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.013 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 
00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.014 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:52.014 --rc genhtml_branch_coverage=1 00:10:52.014 --rc genhtml_function_coverage=1 00:10:52.014 --rc genhtml_legend=1 00:10:52.014 --rc geninfo_all_blocks=1 00:10:52.014 --rc geninfo_unexecuted_blocks=1 00:10:52.014 00:10:52.014 ' 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.014 --rc genhtml_branch_coverage=1 00:10:52.014 --rc genhtml_function_coverage=1 00:10:52.014 --rc genhtml_legend=1 00:10:52.014 --rc geninfo_all_blocks=1 00:10:52.014 --rc geninfo_unexecuted_blocks=1 00:10:52.014 00:10:52.014 ' 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.014 --rc genhtml_branch_coverage=1 00:10:52.014 --rc genhtml_function_coverage=1 00:10:52.014 --rc genhtml_legend=1 00:10:52.014 --rc geninfo_all_blocks=1 00:10:52.014 --rc geninfo_unexecuted_blocks=1 00:10:52.014 00:10:52.014 ' 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.014 --rc genhtml_branch_coverage=1 00:10:52.014 --rc genhtml_function_coverage=1 00:10:52.014 --rc genhtml_legend=1 00:10:52.014 --rc geninfo_all_blocks=1 00:10:52.014 --rc geninfo_unexecuted_blocks=1 00:10:52.014 00:10:52.014 ' 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:52.014 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output ']' 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh ]] 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/build_config.sh 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:52.014 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:52.277 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:52.277 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:52.277 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:52.277 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh 00:10:52.278 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/applications.sh 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 
00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk/config.h ]] 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:52.278 #define SPDK_CONFIG_H 00:10:52.278 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:52.278 #define SPDK_CONFIG_APPS 1 00:10:52.278 #define SPDK_CONFIG_ARCH native 00:10:52.278 #undef SPDK_CONFIG_ASAN 00:10:52.278 #undef SPDK_CONFIG_AVAHI 00:10:52.278 #undef SPDK_CONFIG_CET 00:10:52.278 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:52.278 #define SPDK_CONFIG_COVERAGE 1 00:10:52.278 #define SPDK_CONFIG_CROSS_PREFIX 00:10:52.278 #undef SPDK_CONFIG_CRYPTO 00:10:52.278 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:52.278 #undef SPDK_CONFIG_CUSTOMOCF 00:10:52.278 #undef SPDK_CONFIG_DAOS 00:10:52.278 #define SPDK_CONFIG_DAOS_DIR 00:10:52.278 #define SPDK_CONFIG_DEBUG 1 00:10:52.278 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:52.278 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build 00:10:52.278 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:52.278 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:52.278 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:52.278 #undef SPDK_CONFIG_DPDK_UADK 00:10:52.278 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/lib/env_dpdk 00:10:52.278 #define SPDK_CONFIG_EXAMPLES 1 00:10:52.278 #undef SPDK_CONFIG_FC 00:10:52.278 #define SPDK_CONFIG_FC_PATH 00:10:52.278 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:52.278 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:52.278 #define SPDK_CONFIG_FSDEV 1 00:10:52.278 #undef SPDK_CONFIG_FUSE 00:10:52.278 #undef SPDK_CONFIG_FUZZER 00:10:52.278 #define SPDK_CONFIG_FUZZER_LIB 00:10:52.278 #undef SPDK_CONFIG_GOLANG 00:10:52.278 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:52.278 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:52.278 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:52.278 
#define SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:52.278 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:52.278 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:52.278 #undef SPDK_CONFIG_HAVE_LZ4 00:10:52.278 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:52.278 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:52.278 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:52.278 #define SPDK_CONFIG_IDXD 1 00:10:52.278 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:52.278 #undef SPDK_CONFIG_IPSEC_MB 00:10:52.278 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:52.278 #define SPDK_CONFIG_ISAL 1 00:10:52.278 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:52.278 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:52.278 #define SPDK_CONFIG_LIBDIR 00:10:52.278 #undef SPDK_CONFIG_LTO 00:10:52.278 #define SPDK_CONFIG_MAX_LCORES 128 00:10:52.278 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:52.278 #define SPDK_CONFIG_NVME_CUSE 1 00:10:52.278 #undef SPDK_CONFIG_OCF 00:10:52.278 #define SPDK_CONFIG_OCF_PATH 00:10:52.278 #define SPDK_CONFIG_OPENSSL_PATH 00:10:52.278 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:52.278 #define SPDK_CONFIG_PGO_DIR 00:10:52.278 #undef SPDK_CONFIG_PGO_USE 00:10:52.278 #define SPDK_CONFIG_PREFIX /usr/local 00:10:52.278 #undef SPDK_CONFIG_RAID5F 00:10:52.278 #undef SPDK_CONFIG_RBD 00:10:52.278 #define SPDK_CONFIG_RDMA 1 00:10:52.278 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:52.278 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:52.278 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:52.278 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:52.278 #define SPDK_CONFIG_SHARED 1 00:10:52.278 #undef SPDK_CONFIG_SMA 00:10:52.278 #define SPDK_CONFIG_TESTS 1 00:10:52.278 #undef SPDK_CONFIG_TSAN 00:10:52.278 #define SPDK_CONFIG_UBLK 1 00:10:52.278 #define SPDK_CONFIG_UBSAN 1 00:10:52.278 #undef SPDK_CONFIG_UNIT_TESTS 00:10:52.278 #undef SPDK_CONFIG_URING 00:10:52.278 #define SPDK_CONFIG_URING_PATH 00:10:52.278 #undef SPDK_CONFIG_URING_ZNS 00:10:52.278 #undef SPDK_CONFIG_USDT 00:10:52.278 #undef SPDK_CONFIG_VBDEV_COMPRESS 
00:10:52.278 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:52.278 #define SPDK_CONFIG_VFIO_USER 1 00:10:52.278 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:52.278 #define SPDK_CONFIG_VHOST 1 00:10:52.278 #define SPDK_CONFIG_VIRTIO 1 00:10:52.278 #undef SPDK_CONFIG_VTUNE 00:10:52.278 #define SPDK_CONFIG_VTUNE_DIR 00:10:52.278 #define SPDK_CONFIG_WERROR 1 00:10:52.278 #define SPDK_CONFIG_WPDK_DIR 00:10:52.278 #undef SPDK_CONFIG_XNVME 00:10:52.278 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.278 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/common 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/pm/../../../ 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/.run_test_name 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:52.278 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/power ]] 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:52.279 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:52.279 
22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:52.279 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:52.279 
22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:52.279 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:52.279 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/libvfio-user/usr/local/lib 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/python 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/ar-xnvme-fixer 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:52.280 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:52.280 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:52.281 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2275310 ]] 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2275310 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:52.281 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.48wYDA 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target /tmp/spdk.48wYDA/tests/target /tmp/spdk.48wYDA 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=194059788288 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=201248804864 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7189016576 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100614369280 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624400384 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=40226734080 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=40249761792 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23027712 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=100624023552 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=100624404480 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=380928 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=20124864512 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=20124876800 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:52.281 * Looking for test storage... 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=194059788288 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:52.281 
22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9403609088 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:52.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:10:52.281 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:52.282 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.282 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:52.282 22:41:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.282 --rc genhtml_branch_coverage=1 00:10:52.282 --rc genhtml_function_coverage=1 00:10:52.282 --rc genhtml_legend=1 00:10:52.282 --rc geninfo_all_blocks=1 00:10:52.282 --rc geninfo_unexecuted_blocks=1 00:10:52.282 00:10:52.282 ' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.282 --rc genhtml_branch_coverage=1 00:10:52.282 --rc genhtml_function_coverage=1 00:10:52.282 --rc genhtml_legend=1 00:10:52.282 --rc geninfo_all_blocks=1 00:10:52.282 --rc geninfo_unexecuted_blocks=1 00:10:52.282 00:10:52.282 ' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.282 --rc genhtml_branch_coverage=1 00:10:52.282 --rc genhtml_function_coverage=1 00:10:52.282 --rc genhtml_legend=1 00:10:52.282 --rc geninfo_all_blocks=1 00:10:52.282 --rc geninfo_unexecuted_blocks=1 00:10:52.282 00:10:52.282 ' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:10:52.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.282 --rc genhtml_branch_coverage=1 00:10:52.282 --rc genhtml_function_coverage=1 00:10:52.282 --rc genhtml_legend=1 00:10:52.282 --rc geninfo_all_blocks=1 00:10:52.282 --rc geninfo_unexecuted_blocks=1 00:10:52.282 00:10:52.282 ' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:10:52.282 22:41:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.541 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.541 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.541 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.542 22:42:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.542 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.542 22:42:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.113 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.114 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:59.114 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:59.114 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.114 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:59.114 Found net devices under 0000:86:00.0: cvl_0_0 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:59.114 Found net devices under 0000:86:00.1: cvl_0_1 00:10:59.114 22:42:05 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:10:59.114 00:10:59.114 --- 10.0.0.2 ping statistics --- 00:10:59.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.114 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:10:59.114 00:10:59.114 --- 10.0.0.1 ping statistics --- 00:10:59.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.114 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.114 22:42:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.114 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:59.114 22:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 ************************************ 00:10:59.115 START TEST nvmf_filesystem_no_in_capsule 00:10:59.115 ************************************ 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2278684 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2278684 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 2278684 ']' 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 [2024-12-10 22:42:06.129023] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:10:59.115 [2024-12-10 22:42:06.129071] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.115 [2024-12-10 22:42:06.209995] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.115 [2024-12-10 22:42:06.249816] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.115 [2024-12-10 22:42:06.249855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:59.115 [2024-12-10 22:42:06.249862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.115 [2024-12-10 22:42:06.249870] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.115 [2024-12-10 22:42:06.249876] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:59.115 [2024-12-10 22:42:06.251420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.115 [2024-12-10 22:42:06.251529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.115 [2024-12-10 22:42:06.251612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.115 [2024-12-10 22:42:06.251613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 [2024-12-10 22:42:06.402112] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 [2024-12-10 22:42:06.560494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:59.115 22:42:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:59.115 { 00:10:59.115 "name": "Malloc1", 00:10:59.115 "aliases": [ 00:10:59.115 "f930467a-d659-40cf-bf86-c54f165b3f97" 00:10:59.115 ], 00:10:59.115 "product_name": "Malloc disk", 00:10:59.115 "block_size": 512, 00:10:59.115 "num_blocks": 1048576, 00:10:59.115 "uuid": "f930467a-d659-40cf-bf86-c54f165b3f97", 00:10:59.115 "assigned_rate_limits": { 00:10:59.115 "rw_ios_per_sec": 0, 00:10:59.115 "rw_mbytes_per_sec": 0, 00:10:59.115 "r_mbytes_per_sec": 0, 00:10:59.115 "w_mbytes_per_sec": 0 00:10:59.115 }, 00:10:59.115 "claimed": true, 00:10:59.115 "claim_type": "exclusive_write", 00:10:59.115 "zoned": false, 00:10:59.115 "supported_io_types": { 00:10:59.115 "read": true, 00:10:59.115 "write": true, 00:10:59.115 "unmap": true, 00:10:59.115 "flush": true, 00:10:59.115 "reset": true, 00:10:59.115 "nvme_admin": false, 00:10:59.115 "nvme_io": false, 00:10:59.115 "nvme_io_md": false, 00:10:59.115 "write_zeroes": true, 00:10:59.115 "zcopy": true, 00:10:59.115 "get_zone_info": false, 00:10:59.115 "zone_management": false, 00:10:59.115 "zone_append": false, 00:10:59.115 "compare": false, 00:10:59.115 "compare_and_write": 
false, 00:10:59.115 "abort": true, 00:10:59.115 "seek_hole": false, 00:10:59.115 "seek_data": false, 00:10:59.115 "copy": true, 00:10:59.115 "nvme_iov_md": false 00:10:59.115 }, 00:10:59.115 "memory_domains": [ 00:10:59.115 { 00:10:59.115 "dma_device_id": "system", 00:10:59.115 "dma_device_type": 1 00:10:59.115 }, 00:10:59.115 { 00:10:59.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.115 "dma_device_type": 2 00:10:59.115 } 00:10:59.115 ], 00:10:59.115 "driver_specific": {} 00:10:59.115 } 00:10:59.115 ]' 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:59.115 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:59.116 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:59.116 22:42:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.492 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:00.492 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:00.492 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.492 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:00.492 22:42:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:02.396 22:42:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:02.396 22:42:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:02.655 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:02.655 22:42:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:03.590 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:03.590 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:03.590 22:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:03.590 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.590 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:03.848 ************************************ 00:11:03.848 START TEST filesystem_ext4 00:11:03.848 ************************************ 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:03.848 22:42:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:03.848 22:42:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:03.848 mke2fs 1.47.0 (5-Feb-2023) 00:11:03.848 Discarding device blocks: 0/522240 done 00:11:03.848 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:03.848 Filesystem UUID: 1454f8b2-2167-4553-b164-606a2d6478a8 00:11:03.848 Superblock backups stored on blocks: 00:11:03.848 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:03.848 00:11:03.848 Allocating group tables: 0/64 done 00:11:03.848 Writing inode tables: 0/64 done 00:11:04.414 Creating journal (8192 blocks): done 00:11:04.414 Writing superblocks and filesystem accounting information: 0/64 done 00:11:04.414 00:11:04.414 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:04.414 22:42:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:09.720 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:09.720 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:09.720 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:09.720 22:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:09.720 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:09.720 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2278684 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:09.979 00:11:09.979 real 0m6.136s 00:11:09.979 user 0m0.028s 00:11:09.979 sys 0m0.072s 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:09.979 ************************************ 00:11:09.979 END TEST filesystem_ext4 00:11:09.979 ************************************ 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:09.979 
22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:09.979 ************************************ 00:11:09.979 START TEST filesystem_btrfs 00:11:09.979 ************************************ 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:09.979 22:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:09.979 btrfs-progs v6.8.1 00:11:09.979 See https://btrfs.readthedocs.io for more information. 00:11:09.979 00:11:09.979 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:09.979 NOTE: several default settings have changed in version 5.15, please make sure 00:11:09.979 this does not affect your deployments: 00:11:09.979 - DUP for metadata (-m dup) 00:11:09.979 - enabled no-holes (-O no-holes) 00:11:09.979 - enabled free-space-tree (-R free-space-tree) 00:11:09.979 00:11:09.979 Label: (null) 00:11:09.979 UUID: afcea18b-ddf6-475c-8b91-e1b6c1fb57ed 00:11:09.979 Node size: 16384 00:11:09.979 Sector size: 4096 (CPU page size: 4096) 00:11:09.979 Filesystem size: 510.00MiB 00:11:09.979 Block group profiles: 00:11:09.979 Data: single 8.00MiB 00:11:09.979 Metadata: DUP 32.00MiB 00:11:09.979 System: DUP 8.00MiB 00:11:09.979 SSD detected: yes 00:11:09.979 Zoned device: no 00:11:09.979 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:09.979 Checksum: crc32c 00:11:09.979 Number of devices: 1 00:11:09.979 Devices: 00:11:09.979 ID SIZE PATH 00:11:09.979 1 510.00MiB /dev/nvme0n1p1 00:11:09.979 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:09.979 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.547 22:42:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.547 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:10.547 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.547 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:10.547 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:10.547 22:42:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2278684 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.547 00:11:10.547 real 0m0.489s 00:11:10.547 user 0m0.022s 00:11:10.547 sys 0m0.116s 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.547 
22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.547 ************************************ 00:11:10.547 END TEST filesystem_btrfs 00:11:10.547 ************************************ 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.547 ************************************ 00:11:10.547 START TEST filesystem_xfs 00:11:10.547 ************************************ 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:10.547 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:10.547 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:10.547 = sectsz=512 attr=2, projid32bit=1 00:11:10.547 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:10.547 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:10.547 data = bsize=4096 blocks=130560, imaxpct=25 00:11:10.547 = sunit=0 swidth=0 blks 00:11:10.547 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:10.547 log =internal log bsize=4096 blocks=16384, version=2 00:11:10.547 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:10.547 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:11.483 Discarding blocks...Done. 
00:11:11.483 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:11.483 22:42:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2278684 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.386 22:42:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.386 00:11:13.386 real 0m2.646s 00:11:13.386 user 0m0.028s 00:11:13.386 sys 0m0.070s 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:13.386 ************************************ 00:11:13.386 END TEST filesystem_xfs 00:11:13.386 ************************************ 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:13.386 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.386 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2278684 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2278684 ']' 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2278684 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2278684 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2278684' 00:11:13.387 killing process with pid 2278684 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2278684 00:11:13.387 22:42:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2278684 00:11:13.646 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:13.646 00:11:13.646 real 0m15.236s 00:11:13.646 user 0m59.824s 00:11:13.646 sys 0m1.392s 00:11:13.646 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.646 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.646 ************************************ 00:11:13.646 END TEST nvmf_filesystem_no_in_capsule 00:11:13.646 ************************************ 00:11:13.646 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:13.646 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.646 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.646 22:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.905 ************************************ 00:11:13.905 START TEST nvmf_filesystem_in_capsule 00:11:13.905 ************************************ 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2281835 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2281835 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2281835 ']' 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:13.905 22:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:13.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:13.905 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:13.905 [2024-12-10 22:42:21.438047] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:11:13.905 [2024-12-10 22:42:21.438086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:13.905 [2024-12-10 22:42:21.500194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:13.905 [2024-12-10 22:42:21.542314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:13.906 [2024-12-10 22:42:21.542351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:13.906 [2024-12-10 22:42:21.542362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:13.906 [2024-12-10 22:42:21.542368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:13.906 [2024-12-10 22:42:21.542373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:13.906 [2024-12-10 22:42:21.543889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.906 [2024-12-10 22:42:21.543928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.906 [2024-12-10 22:42:21.544035] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.906 [2024-12-10 22:42:21.544036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 [2024-12-10 22:42:21.685662] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 22:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 [2024-12-10 22:42:21.839331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.165 22:42:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:14.165 { 00:11:14.165 "name": "Malloc1", 00:11:14.165 "aliases": [ 00:11:14.165 "e6ed8ae9-d014-42a8-b429-167d898de57a" 00:11:14.165 ], 00:11:14.165 "product_name": "Malloc disk", 00:11:14.165 "block_size": 512, 00:11:14.165 "num_blocks": 1048576, 00:11:14.165 "uuid": "e6ed8ae9-d014-42a8-b429-167d898de57a", 00:11:14.165 "assigned_rate_limits": { 00:11:14.165 "rw_ios_per_sec": 0, 00:11:14.165 "rw_mbytes_per_sec": 0, 00:11:14.165 "r_mbytes_per_sec": 0, 00:11:14.165 "w_mbytes_per_sec": 0 00:11:14.165 }, 00:11:14.165 "claimed": true, 00:11:14.165 "claim_type": "exclusive_write", 00:11:14.165 "zoned": false, 00:11:14.165 "supported_io_types": { 00:11:14.165 "read": true, 00:11:14.165 "write": true, 00:11:14.165 "unmap": true, 00:11:14.165 "flush": true, 00:11:14.165 "reset": true, 00:11:14.165 "nvme_admin": false, 00:11:14.165 "nvme_io": false, 00:11:14.165 "nvme_io_md": false, 00:11:14.165 "write_zeroes": true, 00:11:14.165 "zcopy": true, 00:11:14.165 "get_zone_info": false, 00:11:14.165 "zone_management": false, 00:11:14.165 "zone_append": false, 00:11:14.165 "compare": false, 00:11:14.165 "compare_and_write": false, 00:11:14.165 "abort": true, 00:11:14.165 "seek_hole": false, 00:11:14.165 "seek_data": false, 00:11:14.165 "copy": true, 00:11:14.165 "nvme_iov_md": false 00:11:14.165 }, 00:11:14.165 "memory_domains": [ 00:11:14.165 { 00:11:14.165 "dma_device_id": "system", 00:11:14.165 "dma_device_type": 1 00:11:14.165 }, 00:11:14.165 { 00:11:14.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.165 "dma_device_type": 2 00:11:14.165 } 00:11:14.165 ], 00:11:14.165 
"driver_specific": {} 00:11:14.165 } 00:11:14.165 ]' 00:11:14.165 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:14.424 22:42:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:15.801 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:15.801 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:15.801 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:15.801 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:15.801 22:42:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:17.703 22:42:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:17.703 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:18.269 22:42:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:19.205 ************************************ 00:11:19.205 START TEST filesystem_in_capsule_ext4 00:11:19.205 ************************************ 00:11:19.205 22:42:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:19.205 22:42:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:19.205 mke2fs 1.47.0 (5-Feb-2023) 00:11:19.205 Discarding device blocks: 
0/522240 done 00:11:19.205 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:19.205 Filesystem UUID: a53d8d60-a0d4-4a31-aa46-0bdeb4c25ff8 00:11:19.205 Superblock backups stored on blocks: 00:11:19.205 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:19.205 00:11:19.205 Allocating group tables: 0/64 done 00:11:19.205 Writing inode tables: 0/64 done 00:11:19.463 Creating journal (8192 blocks): done 00:11:21.099 Writing superblocks and filesystem accounting information: 0/64 done 00:11:21.099 00:11:21.099 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:21.099 22:42:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2281835 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.662 00:11:27.662 real 0m8.044s 00:11:27.662 user 0m0.022s 00:11:27.662 sys 0m0.077s 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:27.662 ************************************ 00:11:27.662 END TEST filesystem_in_capsule_ext4 00:11:27.662 ************************************ 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:27.662 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.663 ************************************ 00:11:27.663 START 
TEST filesystem_in_capsule_btrfs 00:11:27.663 ************************************ 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:27.663 22:42:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:27.663 btrfs-progs v6.8.1 00:11:27.663 See https://btrfs.readthedocs.io for more information. 00:11:27.663 00:11:27.663 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:27.663 NOTE: several default settings have changed in version 5.15, please make sure 00:11:27.663 this does not affect your deployments: 00:11:27.663 - DUP for metadata (-m dup) 00:11:27.663 - enabled no-holes (-O no-holes) 00:11:27.663 - enabled free-space-tree (-R free-space-tree) 00:11:27.663 00:11:27.663 Label: (null) 00:11:27.663 UUID: 59f8301d-4543-4815-8377-472c29824753 00:11:27.663 Node size: 16384 00:11:27.663 Sector size: 4096 (CPU page size: 4096) 00:11:27.663 Filesystem size: 510.00MiB 00:11:27.663 Block group profiles: 00:11:27.663 Data: single 8.00MiB 00:11:27.663 Metadata: DUP 32.00MiB 00:11:27.663 System: DUP 8.00MiB 00:11:27.663 SSD detected: yes 00:11:27.663 Zoned device: no 00:11:27.663 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:27.663 Checksum: crc32c 00:11:27.663 Number of devices: 1 00:11:27.663 Devices: 00:11:27.663 ID SIZE PATH 00:11:27.663 1 510.00MiB /dev/nvme0n1p1 00:11:27.663 00:11:27.663 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:27.663 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2281835 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.921 00:11:27.921 real 0m0.733s 00:11:27.921 user 0m0.023s 00:11:27.921 sys 0m0.115s 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.921 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.921 ************************************ 00:11:27.921 END TEST filesystem_in_capsule_btrfs 00:11:27.921 ************************************ 00:11:28.180 22:42:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:28.180 ************************************ 00:11:28.180 START TEST filesystem_in_capsule_xfs 00:11:28.180 ************************************ 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:28.180 
22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:28.180 22:42:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:28.180 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:28.180 = sectsz=512 attr=2, projid32bit=1 00:11:28.180 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:28.180 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:28.180 data = bsize=4096 blocks=130560, imaxpct=25 00:11:28.180 = sunit=0 swidth=0 blks 00:11:28.180 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:28.180 log =internal log bsize=4096 blocks=16384, version=2 00:11:28.180 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:28.180 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:29.117 Discarding blocks...Done. 
00:11:29.117 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:29.117 22:42:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2281835 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:31.020 00:11:31.020 real 0m2.741s 00:11:31.020 user 0m0.023s 00:11:31.020 sys 0m0.074s 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:31.020 ************************************ 00:11:31.020 END TEST filesystem_in_capsule_xfs 00:11:31.020 ************************************ 00:11:31.020 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.278 22:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:31.278 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2281835 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2281835 ']' 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2281835 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.279 22:42:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2281835 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2281835' 00:11:31.279 killing process with pid 2281835 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2281835 00:11:31.279 22:42:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2281835 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:31.847 00:11:31.847 real 0m17.887s 00:11:31.847 user 1m10.474s 00:11:31.847 sys 0m1.420s 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.847 ************************************ 00:11:31.847 END TEST nvmf_filesystem_in_capsule 00:11:31.847 ************************************ 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:31.847 rmmod nvme_tcp 00:11:31.847 rmmod nvme_fabrics 00:11:31.847 rmmod nvme_keyring 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:31.847 22:42:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.752 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.752 00:11:33.752 real 0m41.904s 00:11:33.752 user 2m12.402s 00:11:33.752 sys 0m7.501s 00:11:33.752 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:33.752 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:33.752 ************************************ 00:11:33.752 END TEST nvmf_filesystem 00:11:33.752 ************************************ 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.011 ************************************ 00:11:34.011 START TEST nvmf_target_discovery 00:11:34.011 ************************************ 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:34.011 * Looking for test storage... 
00:11:34.011 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:34.011 
22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:34.011 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:34.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.012 --rc genhtml_branch_coverage=1 00:11:34.012 --rc genhtml_function_coverage=1 00:11:34.012 --rc genhtml_legend=1 00:11:34.012 --rc geninfo_all_blocks=1 00:11:34.012 --rc geninfo_unexecuted_blocks=1 00:11:34.012 00:11:34.012 ' 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:34.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.012 --rc genhtml_branch_coverage=1 00:11:34.012 --rc genhtml_function_coverage=1 00:11:34.012 --rc genhtml_legend=1 00:11:34.012 --rc geninfo_all_blocks=1 00:11:34.012 --rc geninfo_unexecuted_blocks=1 00:11:34.012 00:11:34.012 ' 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:34.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.012 --rc genhtml_branch_coverage=1 00:11:34.012 --rc genhtml_function_coverage=1 00:11:34.012 --rc genhtml_legend=1 00:11:34.012 --rc geninfo_all_blocks=1 00:11:34.012 --rc geninfo_unexecuted_blocks=1 00:11:34.012 00:11:34.012 ' 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:34.012 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.012 --rc genhtml_branch_coverage=1 00:11:34.012 --rc genhtml_function_coverage=1 00:11:34.012 --rc genhtml_legend=1 00:11:34.012 --rc geninfo_all_blocks=1 00:11:34.012 --rc geninfo_unexecuted_blocks=1 00:11:34.012 00:11:34.012 ' 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:34.012 22:42:41 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:34.012 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:34.272 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:34.272 22:42:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.844 22:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.845 22:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:40.845 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:40.845 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:40.845 22:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:40.845 Found net devices under 0000:86:00.0: cvl_0_0 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:40.845 22:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:40.845 Found net devices under 0000:86:00.1: cvl_0_1 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:40.845 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.845 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:11:40.845 00:11:40.845 --- 10.0.0.2 ping statistics --- 00:11:40.845 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.845 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:11:40.845 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.845 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.845 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:40.845 00:11:40.845 --- 10.0.0.1 ping statistics --- 00:11:40.846 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.846 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2288359 00:11:40.846 22:42:47 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2288359 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2288359 ']' 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.846 22:42:47 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:40.846 [2024-12-10 22:42:47.769459] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:11:40.846 [2024-12-10 22:42:47.769516] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.846 [2024-12-10 22:42:47.851410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.846 [2024-12-10 22:42:47.893807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:40.846 [2024-12-10 22:42:47.893840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.846 [2024-12-10 22:42:47.893848] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.846 [2024-12-10 22:42:47.893854] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.846 [2024-12-10 22:42:47.893860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.846 [2024-12-10 22:42:47.895417] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.846 [2024-12-10 22:42:47.895525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.846 [2024-12-10 22:42:47.895543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.846 [2024-12-10 22:42:47.895542] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 [2024-12-10 22:42:48.651744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 Null1 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.105 
22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.105 [2024-12-10 22:42:48.712328] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.105 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 Null2 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 
22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 Null3 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 Null4 00:11:41.106 
22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.106 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.365 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.365 22:42:48 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:41.365 00:11:41.365 Discovery Log Number of Records 6, Generation counter 6 00:11:41.365 =====Discovery Log Entry 0====== 00:11:41.365 trtype: tcp 00:11:41.365 adrfam: ipv4 00:11:41.365 subtype: current discovery subsystem 00:11:41.365 treq: not required 00:11:41.365 portid: 0 00:11:41.365 trsvcid: 4420 00:11:41.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:41.365 traddr: 10.0.0.2 00:11:41.365 eflags: explicit discovery connections, duplicate discovery information 00:11:41.365 sectype: none 00:11:41.365 =====Discovery Log Entry 1====== 00:11:41.365 trtype: tcp 00:11:41.365 adrfam: ipv4 00:11:41.365 subtype: nvme subsystem 00:11:41.365 treq: not required 00:11:41.365 portid: 0 00:11:41.365 trsvcid: 4420 00:11:41.365 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:41.365 traddr: 10.0.0.2 00:11:41.365 eflags: none 00:11:41.365 sectype: none 00:11:41.365 =====Discovery Log Entry 2====== 00:11:41.365 
trtype: tcp 00:11:41.365 adrfam: ipv4 00:11:41.365 subtype: nvme subsystem 00:11:41.365 treq: not required 00:11:41.365 portid: 0 00:11:41.365 trsvcid: 4420 00:11:41.365 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:41.365 traddr: 10.0.0.2 00:11:41.365 eflags: none 00:11:41.365 sectype: none 00:11:41.365 =====Discovery Log Entry 3====== 00:11:41.365 trtype: tcp 00:11:41.365 adrfam: ipv4 00:11:41.365 subtype: nvme subsystem 00:11:41.365 treq: not required 00:11:41.365 portid: 0 00:11:41.365 trsvcid: 4420 00:11:41.365 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:41.365 traddr: 10.0.0.2 00:11:41.365 eflags: none 00:11:41.365 sectype: none 00:11:41.365 =====Discovery Log Entry 4====== 00:11:41.365 trtype: tcp 00:11:41.365 adrfam: ipv4 00:11:41.365 subtype: nvme subsystem 00:11:41.365 treq: not required 00:11:41.365 portid: 0 00:11:41.365 trsvcid: 4420 00:11:41.365 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:41.365 traddr: 10.0.0.2 00:11:41.365 eflags: none 00:11:41.365 sectype: none 00:11:41.365 =====Discovery Log Entry 5====== 00:11:41.365 trtype: tcp 00:11:41.365 adrfam: ipv4 00:11:41.365 subtype: discovery subsystem referral 00:11:41.365 treq: not required 00:11:41.365 portid: 0 00:11:41.365 trsvcid: 4430 00:11:41.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:41.365 traddr: 10.0.0.2 00:11:41.365 eflags: none 00:11:41.365 sectype: none 00:11:41.365 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:41.365 Perform nvmf subsystem discovery via RPC 00:11:41.365 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:41.365 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.365 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.365 [ 00:11:41.365 { 00:11:41.365 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:11:41.365 "subtype": "Discovery", 00:11:41.365 "listen_addresses": [ 00:11:41.365 { 00:11:41.365 "trtype": "TCP", 00:11:41.365 "adrfam": "IPv4", 00:11:41.366 "traddr": "10.0.0.2", 00:11:41.366 "trsvcid": "4420" 00:11:41.366 } 00:11:41.366 ], 00:11:41.366 "allow_any_host": true, 00:11:41.366 "hosts": [] 00:11:41.366 }, 00:11:41.366 { 00:11:41.366 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:41.366 "subtype": "NVMe", 00:11:41.366 "listen_addresses": [ 00:11:41.366 { 00:11:41.366 "trtype": "TCP", 00:11:41.366 "adrfam": "IPv4", 00:11:41.366 "traddr": "10.0.0.2", 00:11:41.366 "trsvcid": "4420" 00:11:41.366 } 00:11:41.366 ], 00:11:41.366 "allow_any_host": true, 00:11:41.366 "hosts": [], 00:11:41.366 "serial_number": "SPDK00000000000001", 00:11:41.366 "model_number": "SPDK bdev Controller", 00:11:41.366 "max_namespaces": 32, 00:11:41.366 "min_cntlid": 1, 00:11:41.366 "max_cntlid": 65519, 00:11:41.366 "namespaces": [ 00:11:41.366 { 00:11:41.366 "nsid": 1, 00:11:41.366 "bdev_name": "Null1", 00:11:41.366 "name": "Null1", 00:11:41.366 "nguid": "7E247E8FE3D64BB5880D1D849E884519", 00:11:41.366 "uuid": "7e247e8f-e3d6-4bb5-880d-1d849e884519" 00:11:41.366 } 00:11:41.366 ] 00:11:41.366 }, 00:11:41.366 { 00:11:41.366 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:41.366 "subtype": "NVMe", 00:11:41.366 "listen_addresses": [ 00:11:41.366 { 00:11:41.366 "trtype": "TCP", 00:11:41.366 "adrfam": "IPv4", 00:11:41.366 "traddr": "10.0.0.2", 00:11:41.366 "trsvcid": "4420" 00:11:41.366 } 00:11:41.366 ], 00:11:41.366 "allow_any_host": true, 00:11:41.366 "hosts": [], 00:11:41.366 "serial_number": "SPDK00000000000002", 00:11:41.366 "model_number": "SPDK bdev Controller", 00:11:41.366 "max_namespaces": 32, 00:11:41.366 "min_cntlid": 1, 00:11:41.366 "max_cntlid": 65519, 00:11:41.366 "namespaces": [ 00:11:41.366 { 00:11:41.366 "nsid": 1, 00:11:41.366 "bdev_name": "Null2", 00:11:41.366 "name": "Null2", 00:11:41.366 "nguid": "36FE0730018A42D1AA37C3A6AE1B53BD", 
00:11:41.366 "uuid": "36fe0730-018a-42d1-aa37-c3a6ae1b53bd" 00:11:41.366 } 00:11:41.366 ] 00:11:41.366 }, 00:11:41.366 { 00:11:41.366 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:41.366 "subtype": "NVMe", 00:11:41.366 "listen_addresses": [ 00:11:41.366 { 00:11:41.366 "trtype": "TCP", 00:11:41.366 "adrfam": "IPv4", 00:11:41.366 "traddr": "10.0.0.2", 00:11:41.366 "trsvcid": "4420" 00:11:41.366 } 00:11:41.366 ], 00:11:41.366 "allow_any_host": true, 00:11:41.366 "hosts": [], 00:11:41.366 "serial_number": "SPDK00000000000003", 00:11:41.366 "model_number": "SPDK bdev Controller", 00:11:41.366 "max_namespaces": 32, 00:11:41.366 "min_cntlid": 1, 00:11:41.366 "max_cntlid": 65519, 00:11:41.366 "namespaces": [ 00:11:41.366 { 00:11:41.366 "nsid": 1, 00:11:41.366 "bdev_name": "Null3", 00:11:41.366 "name": "Null3", 00:11:41.366 "nguid": "E734B6C89D534BBAAE4A47A3C97BE59E", 00:11:41.366 "uuid": "e734b6c8-9d53-4bba-ae4a-47a3c97be59e" 00:11:41.366 } 00:11:41.366 ] 00:11:41.366 }, 00:11:41.366 { 00:11:41.366 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:41.366 "subtype": "NVMe", 00:11:41.366 "listen_addresses": [ 00:11:41.366 { 00:11:41.366 "trtype": "TCP", 00:11:41.366 "adrfam": "IPv4", 00:11:41.366 "traddr": "10.0.0.2", 00:11:41.366 "trsvcid": "4420" 00:11:41.366 } 00:11:41.366 ], 00:11:41.366 "allow_any_host": true, 00:11:41.366 "hosts": [], 00:11:41.366 "serial_number": "SPDK00000000000004", 00:11:41.366 "model_number": "SPDK bdev Controller", 00:11:41.366 "max_namespaces": 32, 00:11:41.366 "min_cntlid": 1, 00:11:41.366 "max_cntlid": 65519, 00:11:41.366 "namespaces": [ 00:11:41.366 { 00:11:41.366 "nsid": 1, 00:11:41.366 "bdev_name": "Null4", 00:11:41.366 "name": "Null4", 00:11:41.366 "nguid": "1C4960853BBA4F24A1C748B53A9A821E", 00:11:41.366 "uuid": "1c496085-3bba-4f24-a1c7-48b53a9a821e" 00:11:41.366 } 00:11:41.366 ] 00:11:41.366 } 00:11:41.366 ] 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.366 
22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.366 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.625 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:41.626 rmmod nvme_tcp 00:11:41.626 rmmod nvme_fabrics 00:11:41.626 rmmod nvme_keyring 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2288359 ']' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2288359 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2288359 ']' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2288359 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 
00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2288359 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2288359' 00:11:41.626 killing process with pid 2288359 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2288359 00:11:41.626 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2288359 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.892 22:42:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:44.430 00:11:44.430 real 0m9.999s 00:11:44.430 user 0m8.284s 00:11:44.430 sys 0m4.882s 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:44.430 ************************************ 00:11:44.430 END TEST nvmf_target_discovery 00:11:44.430 ************************************ 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:44.430 ************************************ 00:11:44.430 START TEST nvmf_referrals 00:11:44.430 ************************************ 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:44.430 * Looking for test storage... 
00:11:44.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:44.430 22:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:44.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.430 
--rc genhtml_branch_coverage=1 00:11:44.430 --rc genhtml_function_coverage=1 00:11:44.430 --rc genhtml_legend=1 00:11:44.430 --rc geninfo_all_blocks=1 00:11:44.430 --rc geninfo_unexecuted_blocks=1 00:11:44.430 00:11:44.430 ' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:44.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.430 --rc genhtml_branch_coverage=1 00:11:44.430 --rc genhtml_function_coverage=1 00:11:44.430 --rc genhtml_legend=1 00:11:44.430 --rc geninfo_all_blocks=1 00:11:44.430 --rc geninfo_unexecuted_blocks=1 00:11:44.430 00:11:44.430 ' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:44.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.430 --rc genhtml_branch_coverage=1 00:11:44.430 --rc genhtml_function_coverage=1 00:11:44.430 --rc genhtml_legend=1 00:11:44.430 --rc geninfo_all_blocks=1 00:11:44.430 --rc geninfo_unexecuted_blocks=1 00:11:44.430 00:11:44.430 ' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:44.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:44.430 --rc genhtml_branch_coverage=1 00:11:44.430 --rc genhtml_function_coverage=1 00:11:44.430 --rc genhtml_legend=1 00:11:44.430 --rc geninfo_all_blocks=1 00:11:44.430 --rc geninfo_unexecuted_blocks=1 00:11:44.430 00:11:44.430 ' 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:44.430 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s 
extglob 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:44.431 22:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:44.431 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:44.431 22:42:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:11:44.431 22:42:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:51.006 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:51.006 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.006 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:51.007 Found net devices under 0000:86:00.0: cvl_0_0 00:11:51.007 22:42:57 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:51.007 Found net devices under 0000:86:00.1: cvl_0_1 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:51.007 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:51.007 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:11:51.007 00:11:51.007 --- 10.0.0.2 ping statistics --- 00:11:51.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.007 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:51.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:51.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:11:51.007 00:11:51.007 --- 10.0.0.1 ping statistics --- 00:11:51.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:51.007 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2292232 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2292232 00:11:51.007 
22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2292232 ']' 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:51.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:51.007 22:42:57 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 [2024-12-10 22:42:57.830693] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:11:51.007 [2024-12-10 22:42:57.830739] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:51.007 [2024-12-10 22:42:57.911061] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:51.007 [2024-12-10 22:42:57.952343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:51.007 [2024-12-10 22:42:57.952380] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:51.007 [2024-12-10 22:42:57.952387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:51.007 [2024-12-10 22:42:57.952393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:51.007 [2024-12-10 22:42:57.952398] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:51.007 [2024-12-10 22:42:57.953845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.007 [2024-12-10 22:42:57.953957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:51.007 [2024-12-10 22:42:57.954064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:51.007 [2024-12-10 22:42:57.954064] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 [2024-12-10 22:42:58.091416] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.007 [2024-12-10 22:42:58.119333] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.007 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:51.008 22:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.008 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.266 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.266 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:51.266 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.266 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.267 22:42:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.525 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:51.783 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.048 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:52.326 22:42:59 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:11:52.326 22:42:59 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:52.326 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:52.590 rmmod nvme_tcp 00:11:52.590 rmmod nvme_fabrics 00:11:52.590 rmmod nvme_keyring 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2292232 ']' 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2292232 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2292232 ']' 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2292232 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:52.590 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2292232 00:11:52.849 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:52.849 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:52.849 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2292232' 00:11:52.849 killing process with pid 2292232 00:11:52.849 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@973 -- # kill 2292232 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2292232 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:52.850 22:43:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:55.386 00:11:55.386 real 0m10.973s 00:11:55.386 user 0m12.823s 00:11:55.386 sys 0m5.185s 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:55.386 
************************************ 00:11:55.386 END TEST nvmf_referrals 00:11:55.386 ************************************ 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:55.386 ************************************ 00:11:55.386 START TEST nvmf_connect_disconnect 00:11:55.386 ************************************ 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:55.386 * Looking for test storage... 
00:11:55.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 
-- # case "$op" in 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:55.386 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:55.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.387 --rc genhtml_branch_coverage=1 00:11:55.387 --rc genhtml_function_coverage=1 00:11:55.387 --rc genhtml_legend=1 00:11:55.387 --rc geninfo_all_blocks=1 00:11:55.387 --rc geninfo_unexecuted_blocks=1 00:11:55.387 00:11:55.387 ' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:55.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.387 --rc genhtml_branch_coverage=1 00:11:55.387 --rc genhtml_function_coverage=1 00:11:55.387 --rc genhtml_legend=1 00:11:55.387 --rc geninfo_all_blocks=1 00:11:55.387 --rc geninfo_unexecuted_blocks=1 00:11:55.387 00:11:55.387 ' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:55.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.387 --rc genhtml_branch_coverage=1 00:11:55.387 --rc genhtml_function_coverage=1 00:11:55.387 --rc genhtml_legend=1 00:11:55.387 --rc geninfo_all_blocks=1 00:11:55.387 --rc geninfo_unexecuted_blocks=1 00:11:55.387 00:11:55.387 ' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:55.387 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:55.387 --rc genhtml_branch_coverage=1 00:11:55.387 --rc genhtml_function_coverage=1 00:11:55.387 --rc genhtml_legend=1 00:11:55.387 --rc geninfo_all_blocks=1 00:11:55.387 --rc geninfo_unexecuted_blocks=1 00:11:55.387 00:11:55.387 ' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:55.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:11:55.387 22:43:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.957 22:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:01.957 22:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.957 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:01.957 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:01.958 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:01.958 22:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:01.958 Found net devices under 0000:86:00.0: cvl_0_0 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:01.958 22:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:01.958 Found net devices under 0000:86:00.1: cvl_0_1 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:01.958 22:43:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:01.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:12:01.958 00:12:01.958 --- 10.0.0.2 ping statistics --- 00:12:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.958 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:01.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:01.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:12:01.958 00:12:01.958 --- 10.0.0.1 ping statistics --- 00:12:01.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.958 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=2296259 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2296259 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2296259 ']' 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.958 22:43:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.958 [2024-12-10 22:43:08.912941] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:12:01.958 [2024-12-10 22:43:08.912985] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.958 [2024-12-10 22:43:08.992770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:01.958 [2024-12-10 22:43:09.035140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:01.958 [2024-12-10 22:43:09.035179] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.958 [2024-12-10 22:43:09.035188] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.958 [2024-12-10 22:43:09.035197] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:01.958 [2024-12-10 22:43:09.035203] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.958 [2024-12-10 22:43:09.036726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.958 [2024-12-10 22:43:09.036832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:01.958 [2024-12-10 22:43:09.036874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.958 [2024-12-10 22:43:09.036875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:01.959 22:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 [2024-12-10 22:43:09.187877] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 22:43:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:01.959 [2024-12-10 22:43:09.248467] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:01.959 22:43:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:05.238 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:18.377 22:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:18.377 rmmod nvme_tcp 00:12:18.377 rmmod nvme_fabrics 00:12:18.377 rmmod nvme_keyring 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2296259 ']' 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2296259 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2296259 ']' 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2296259 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2296259 
00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:18.377 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2296259' 00:12:18.378 killing process with pid 2296259 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2296259 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2296259 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:18.378 22:43:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:18.378 22:43:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.284 22:43:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:20.284 00:12:20.284 real 0m25.341s 00:12:20.284 user 1m8.746s 00:12:20.284 sys 0m5.886s 00:12:20.284 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:20.284 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.284 ************************************ 00:12:20.284 END TEST nvmf_connect_disconnect 00:12:20.284 ************************************ 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:20.544 ************************************ 00:12:20.544 START TEST nvmf_multitarget 00:12:20.544 ************************************ 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:20.544 * Looking for test storage... 
00:12:20.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- 
# : 1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:20.544 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.544 --rc genhtml_branch_coverage=1 00:12:20.544 --rc genhtml_function_coverage=1 00:12:20.544 --rc genhtml_legend=1 00:12:20.544 --rc geninfo_all_blocks=1 00:12:20.544 --rc geninfo_unexecuted_blocks=1 00:12:20.544 00:12:20.544 ' 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:20.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.544 --rc genhtml_branch_coverage=1 00:12:20.544 --rc genhtml_function_coverage=1 00:12:20.544 --rc genhtml_legend=1 00:12:20.544 --rc geninfo_all_blocks=1 00:12:20.544 --rc geninfo_unexecuted_blocks=1 00:12:20.544 00:12:20.544 ' 00:12:20.544 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:20.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.545 --rc genhtml_branch_coverage=1 00:12:20.545 --rc genhtml_function_coverage=1 00:12:20.545 --rc genhtml_legend=1 00:12:20.545 --rc geninfo_all_blocks=1 00:12:20.545 --rc geninfo_unexecuted_blocks=1 00:12:20.545 00:12:20.545 ' 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:20.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:20.545 --rc genhtml_branch_coverage=1 00:12:20.545 --rc genhtml_function_coverage=1 00:12:20.545 --rc genhtml_legend=1 00:12:20.545 --rc geninfo_all_blocks=1 00:12:20.545 --rc geninfo_unexecuted_blocks=1 00:12:20.545 00:12:20.545 ' 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:20.545 22:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:20.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:20.545 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:20.805 22:43:28 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:12:20.805 22:43:28 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:12:27.379 22:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.379 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:27.380 22:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.380 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.380 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.380 22:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.380 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.380 
22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.380 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.380 22:43:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.380 22:43:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:27.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.354 ms 00:12:27.380 00:12:27.380 --- 10.0.0.2 ping statistics --- 00:12:27.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.380 rtt min/avg/max/mdev = 0.354/0.354/0.354/0.000 ms 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:12:27.380 00:12:27.380 --- 10.0.0.1 ping statistics --- 00:12:27.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.380 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2302651 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 2302651 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2302651 ']' 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:27.380 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:27.380 [2024-12-10 22:43:34.385637] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:12:27.380 [2024-12-10 22:43:34.385683] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.380 [2024-12-10 22:43:34.464519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.380 [2024-12-10 22:43:34.505163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.380 [2024-12-10 22:43:34.505202] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:27.380 [2024-12-10 22:43:34.505210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.380 [2024-12-10 22:43:34.505217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.381 [2024-12-10 22:43:34.505222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.381 [2024-12-10 22:43:34.506811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.381 [2024-12-10 22:43:34.506921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.381 [2024-12-10 22:43:34.507026] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.381 [2024-12-10 22:43:34.507027] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.381 22:43:34 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:27.381 "nvmf_tgt_1" 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:27.381 "nvmf_tgt_2" 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.381 22:43:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:27.381 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:27.381 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:27.639 true 00:12:27.639 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:27.639 true 00:12:27.639 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.639 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.898 rmmod nvme_tcp 00:12:27.898 rmmod nvme_fabrics 00:12:27.898 rmmod nvme_keyring 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2302651 ']' 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2302651 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2302651 ']' 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2302651 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2302651 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2302651' 00:12:27.898 killing process with pid 2302651 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2302651 00:12:27.898 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2302651 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.158 22:43:35 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.062 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:30.062 00:12:30.062 real 0m9.684s 00:12:30.062 user 0m7.161s 00:12:30.062 sys 0m4.987s 00:12:30.062 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.062 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:30.062 ************************************ 00:12:30.062 END TEST nvmf_multitarget 00:12:30.062 ************************************ 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.323 ************************************ 00:12:30.323 START TEST nvmf_rpc 00:12:30.323 ************************************ 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:30.323 * Looking for test storage... 
00:12:30.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.323 22:43:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:30.323 22:43:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.323 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:30.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.323 --rc genhtml_branch_coverage=1 00:12:30.323 --rc genhtml_function_coverage=1 00:12:30.323 --rc genhtml_legend=1 00:12:30.323 --rc geninfo_all_blocks=1 00:12:30.323 --rc geninfo_unexecuted_blocks=1 
00:12:30.323 00:12:30.323 ' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.324 --rc genhtml_branch_coverage=1 00:12:30.324 --rc genhtml_function_coverage=1 00:12:30.324 --rc genhtml_legend=1 00:12:30.324 --rc geninfo_all_blocks=1 00:12:30.324 --rc geninfo_unexecuted_blocks=1 00:12:30.324 00:12:30.324 ' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.324 --rc genhtml_branch_coverage=1 00:12:30.324 --rc genhtml_function_coverage=1 00:12:30.324 --rc genhtml_legend=1 00:12:30.324 --rc geninfo_all_blocks=1 00:12:30.324 --rc geninfo_unexecuted_blocks=1 00:12:30.324 00:12:30.324 ' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:30.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.324 --rc genhtml_branch_coverage=1 00:12:30.324 --rc genhtml_function_coverage=1 00:12:30.324 --rc genhtml_legend=1 00:12:30.324 --rc geninfo_all_blocks=1 00:12:30.324 --rc geninfo_unexecuted_blocks=1 00:12:30.324 00:12:30.324 ' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.324 22:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:30.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:30.324 22:43:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:12:30.324 22:43:38 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:36.896 
22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.896 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 
(0x8086 - 0x159b)' 00:12:36.896 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:36.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:36.897 Found net devices under 0000:86:00.0: cvl_0_0 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:36.897 Found net devices under 0000:86:00.1: cvl_0_1 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:36.897 22:43:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:36.897 
22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:36.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:36.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:12:36.897 00:12:36.897 --- 10.0.0.2 ping statistics --- 00:12:36.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.897 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:12:36.897 22:43:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:36.897 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:36.897 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:12:36.897 00:12:36.897 --- 10.0.0.1 ping statistics --- 00:12:36.897 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:36.897 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2306438 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2306438 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2306438 ']' 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.897 [2024-12-10 22:43:44.103696] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:12:36.897 [2024-12-10 22:43:44.103749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.897 [2024-12-10 22:43:44.183234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:36.897 [2024-12-10 22:43:44.225034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:36.897 [2024-12-10 22:43:44.225071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:36.897 [2024-12-10 22:43:44.225078] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:36.897 [2024-12-10 22:43:44.225084] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:36.897 [2024-12-10 22:43:44.225089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:36.897 [2024-12-10 22:43:44.226615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:36.897 [2024-12-10 22:43:44.226723] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:36.897 [2024-12-10 22:43:44.226834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.897 [2024-12-10 22:43:44.226835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:36.897 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.898 22:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:36.898 "tick_rate": 2300000000, 00:12:36.898 "poll_groups": [ 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_000", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [] 00:12:36.898 }, 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_001", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [] 00:12:36.898 }, 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_002", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [] 00:12:36.898 }, 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_003", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [] 00:12:36.898 } 00:12:36.898 ] 00:12:36.898 }' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:36.898 22:43:44 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 [2024-12-10 22:43:44.476609] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:36.898 "tick_rate": 2300000000, 00:12:36.898 "poll_groups": [ 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_000", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [ 00:12:36.898 { 00:12:36.898 "trtype": "TCP" 00:12:36.898 } 00:12:36.898 ] 00:12:36.898 }, 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_001", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 
"completed_nvme_io": 0, 00:12:36.898 "transports": [ 00:12:36.898 { 00:12:36.898 "trtype": "TCP" 00:12:36.898 } 00:12:36.898 ] 00:12:36.898 }, 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_002", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [ 00:12:36.898 { 00:12:36.898 "trtype": "TCP" 00:12:36.898 } 00:12:36.898 ] 00:12:36.898 }, 00:12:36.898 { 00:12:36.898 "name": "nvmf_tgt_poll_group_003", 00:12:36.898 "admin_qpairs": 0, 00:12:36.898 "io_qpairs": 0, 00:12:36.898 "current_admin_qpairs": 0, 00:12:36.898 "current_io_qpairs": 0, 00:12:36.898 "pending_bdev_io": 0, 00:12:36.898 "completed_nvme_io": 0, 00:12:36.898 "transports": [ 00:12:36.898 { 00:12:36.898 "trtype": "TCP" 00:12:36.898 } 00:12:36.898 ] 00:12:36.898 } 00:12:36.898 ] 00:12:36.898 }' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:36.898 
22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.898 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.156 Malloc1 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:37.157 22:43:44 
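The `jsum` helper exercised above (target/rpc.sh@19-20) runs a `jq` filter over the stats JSON and reduces the resulting one-number-per-line stream with `awk`. A minimal sketch of that reduction stage, using illustrative values rather than the all-zero qpair counts from this run:

```shell
# jsum's reducer stage: sum one number per line, print the total.
# jq would normally produce this stream from a filter such as
# '.poll_groups[].admin_qpairs'; here we feed sample values directly.
printf '1\n2\n3\n4\n' | awk '{s+=$1} END {print s}'
```

In the run above every poll group reports zero qpairs, so the summed result is 0 and the `(( 0 == 0 ))` check passes.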
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.157 [2024-12-10 22:43:44.660329] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:37.157 [2024-12-10 22:43:44.688890] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:12:37.157 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:37.157 could not add new controller: failed to write to nvme-fabrics device 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:37.157 22:43:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.529 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.529 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:38.529 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.529 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:38.529 22:43:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:40.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:40.426 22:43:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme
00:12:40.426 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]]
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:40.427 [2024-12-10 22:43:48.035928] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562'
00:12:40.427 Failed to write to /dev/nvme-fabrics: Input/output error
00:12:40.427 could not add new controller: failed to write to nvme-fabrics device
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:40.427 22:43:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:41.798 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME
00:12:41.798 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:41.798 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:41.798 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:41.798 22:43:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:43.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.746 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.747 [2024-12-10 22:43:51.341447] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:43.747 22:43:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:45.121 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:45.121 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:45.122 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:45.122 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:45.122 22:43:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:47.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:47.021 [2024-12-10 22:43:54.725128] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:47.021 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:47.022 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:47.022 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:47.022 22:43:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:48.394 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:48.394 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:48.394 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:48.394 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:48.394 22:43:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:50.293 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.293 22:43:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:50.293 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.551 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:50.551 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.551 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:50.551 [2024-12-10 22:43:58.024375] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:50.551 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.551 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.552 22:43:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:51.485 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:51.485 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:51.485 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:51.485 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:51.485 22:43:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:54.012 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.012 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.013 [2024-12-10 22:44:01.325274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:54.013 22:44:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:54.951 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:54.951 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:54.951 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:54.951 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:54.951 22:44:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:56.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:12:56.851 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops)
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.109 [2024-12-10 22:44:04.629491] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:57.109 22:44:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:12:58.490 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME
00:12:58.490 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0
00:12:58.490 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:58.490 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:58.490 22:44:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:00.390 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops)
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 [2024-12-10 22:44:08.009101] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99
-- # for i in $(seq 1 $loops) 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 [2024-12-10 22:44:08.057135] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 
22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.390 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:13:00.391 [2024-12-10 22:44:08.105263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.391 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 [2024-12-10 22:44:08.153432] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
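The iterations traced above each run the same six RPCs against cnode1: create the subsystem, add a TCP listener, attach the Malloc1 namespace, allow any host, then remove the namespace and delete the subsystem. A condensed standalone sketch of that loop — `rpc_cmd` is stubbed to echo here (in the suite it dispatches to `scripts/rpc.py` against the live target), and the `exercise_subsystem` wrapper name is ours, not the suite's:

```shell
#!/usr/bin/env bash
# Stand-in for the suite's rpc_cmd: the real helper in autotest_common.sh
# forwards its arguments to scripts/rpc.py on a running SPDK target.
rpc_cmd() { echo "rpc: $*"; }

# Hypothetical wrapper around the create/listen/add-ns/teardown cycle
# seen in target/rpc.sh lines 99-107.
exercise_subsystem() {
    local nqn=$1 loops=$2 i
    for i in $(seq 1 "$loops"); do
        rpc_cmd nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns "$nqn" Malloc1
        rpc_cmd nvmf_subsystem_allow_any_host "$nqn"
        rpc_cmd nvmf_subsystem_remove_ns "$nqn" 1
        rpc_cmd nvmf_delete_subsystem "$nqn"
    done
}

exercise_subsystem nqn.2016-06.io.spdk:cnode1 5
```

With the stub in place the loop emits one line per RPC (30 for five iterations), which mirrors the repeated create/teardown records in the trace above.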
00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.649 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 [2024-12-10 22:44:08.201609] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:00.650 "tick_rate": 2300000000, 00:13:00.650 "poll_groups": [ 00:13:00.650 { 00:13:00.650 "name": "nvmf_tgt_poll_group_000", 00:13:00.650 "admin_qpairs": 2, 00:13:00.650 "io_qpairs": 168, 00:13:00.650 "current_admin_qpairs": 0, 00:13:00.650 "current_io_qpairs": 0, 00:13:00.650 "pending_bdev_io": 0, 00:13:00.650 "completed_nvme_io": 268, 00:13:00.650 "transports": [ 00:13:00.650 { 00:13:00.650 "trtype": "TCP" 00:13:00.650 } 00:13:00.650 ] 00:13:00.650 }, 00:13:00.650 { 00:13:00.650 "name": "nvmf_tgt_poll_group_001", 00:13:00.650 "admin_qpairs": 2, 00:13:00.650 "io_qpairs": 168, 00:13:00.650 "current_admin_qpairs": 0, 00:13:00.650 "current_io_qpairs": 0, 00:13:00.650 "pending_bdev_io": 0, 00:13:00.650 "completed_nvme_io": 268, 00:13:00.650 "transports": [ 00:13:00.650 { 00:13:00.650 "trtype": "TCP" 00:13:00.650 } 00:13:00.650 ] 00:13:00.650 }, 00:13:00.650 { 00:13:00.650 "name": "nvmf_tgt_poll_group_002", 00:13:00.650 "admin_qpairs": 1, 00:13:00.650 "io_qpairs": 168, 00:13:00.650 "current_admin_qpairs": 0, 00:13:00.650 "current_io_qpairs": 0, 00:13:00.650 "pending_bdev_io": 0, 
00:13:00.650 "completed_nvme_io": 316, 00:13:00.650 "transports": [ 00:13:00.650 { 00:13:00.650 "trtype": "TCP" 00:13:00.650 } 00:13:00.650 ] 00:13:00.650 }, 00:13:00.650 { 00:13:00.650 "name": "nvmf_tgt_poll_group_003", 00:13:00.650 "admin_qpairs": 2, 00:13:00.650 "io_qpairs": 168, 00:13:00.650 "current_admin_qpairs": 0, 00:13:00.650 "current_io_qpairs": 0, 00:13:00.650 "pending_bdev_io": 0, 00:13:00.650 "completed_nvme_io": 170, 00:13:00.650 "transports": [ 00:13:00.650 { 00:13:00.650 "trtype": "TCP" 00:13:00.650 } 00:13:00.650 ] 00:13:00.650 } 00:13:00.650 ] 00:13:00.650 }' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.650 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:00.650 rmmod nvme_tcp 00:13:00.909 rmmod nvme_fabrics 00:13:00.909 rmmod nvme_keyring 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2306438 ']' 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2306438 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2306438 ']' 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2306438 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306438 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306438' 00:13:00.909 killing process with pid 2306438 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2306438 00:13:00.909 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2306438 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.168 22:44:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.073 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:03.073 00:13:03.074 real 0m32.899s 00:13:03.074 user 1m39.114s 00:13:03.074 sys 0m6.453s 00:13:03.074 22:44:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:03.074 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.074 ************************************ 00:13:03.074 END TEST nvmf_rpc 00:13:03.074 ************************************ 00:13:03.074 22:44:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.074 22:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:03.074 22:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.074 22:44:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:03.333 ************************************ 00:13:03.333 START TEST nvmf_invalid 00:13:03.333 ************************************ 00:13:03.333 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:03.333 * Looking for test storage... 
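The nvmf_invalid suite starts by sourcing `scripts/common.sh`, whose `lt`/`cmp_versions` helpers probe the installed lcov version (the `lt 1.15 2` comparison traced below, split field by field with `IFS=.-:`). A minimal sketch of that dotted-version comparison, assuming plain numeric fields (no `-rc` suffixes, no leading zeros):

```shell
# version_lt A B: succeed when dotted version A sorts before B.
# Sketch of the cmp_versions idea from scripts/common.sh: compare
# field by field, treating missing trailing fields as 0.
version_lt() {
    local IFS=.                 # split "$1"/"$2" on dots
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                    # versions are equal
}
```

As in the trace, `version_lt 1.15 2` succeeds (1 < 2 decides at the first field) while `version_lt 2 1.15` and `version_lt 1.15 1.15` fail.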
00:13:03.333 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:03.333 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:03.333 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:13:03.333 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:03.333 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.334 --rc genhtml_branch_coverage=1 00:13:03.334 --rc 
genhtml_function_coverage=1 00:13:03.334 --rc genhtml_legend=1 00:13:03.334 --rc geninfo_all_blocks=1 00:13:03.334 --rc geninfo_unexecuted_blocks=1 00:13:03.334 00:13:03.334 ' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.334 --rc genhtml_branch_coverage=1 00:13:03.334 --rc genhtml_function_coverage=1 00:13:03.334 --rc genhtml_legend=1 00:13:03.334 --rc geninfo_all_blocks=1 00:13:03.334 --rc geninfo_unexecuted_blocks=1 00:13:03.334 00:13:03.334 ' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.334 --rc genhtml_branch_coverage=1 00:13:03.334 --rc genhtml_function_coverage=1 00:13:03.334 --rc genhtml_legend=1 00:13:03.334 --rc geninfo_all_blocks=1 00:13:03.334 --rc geninfo_unexecuted_blocks=1 00:13:03.334 00:13:03.334 ' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:03.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.334 --rc genhtml_branch_coverage=1 00:13:03.334 --rc genhtml_function_coverage=1 00:13:03.334 --rc genhtml_legend=1 00:13:03.334 --rc geninfo_all_blocks=1 00:13:03.334 --rc geninfo_unexecuted_blocks=1 00:13:03.334 00:13:03.334 ' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:03.334 22:44:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.334 22:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:03.334 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py 00:13:03.334 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:03.335 22:44:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:13:03.335 22:44:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:13:09.908 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.908 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:09.908 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:09.908 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:09.908 Found net devices under 0000:86:00.0: cvl_0_0 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.908 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:09.909 Found net devices under 0000:86:00.1: cvl_0_1 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.909 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.909 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:09.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.280 ms 00:13:09.909 00:13:09.909 --- 10.0.0.2 ping statistics --- 00:13:09.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.909 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:09.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:13:09.909 00:13:09.909 --- 10.0.0.1 ping statistics --- 00:13:09.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.909 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:09.909 22:44:16 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:09.909 22:44:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2314158 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2314158 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2314158 ']' 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.909 [2024-12-10 22:44:17.061395] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:13:09.909 [2024-12-10 22:44:17.061448] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.909 [2024-12-10 22:44:17.142796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.909 [2024-12-10 22:44:17.183686] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.909 [2024-12-10 22:44:17.183723] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.909 [2024-12-10 22:44:17.183731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.909 [2024-12-10 22:44:17.183736] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.909 [2024-12-10 22:44:17.183741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:09.909 [2024-12-10 22:44:17.185272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.909 [2024-12-10 22:44:17.185381] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.909 [2024-12-10 22:44:17.185511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.909 [2024-12-10 22:44:17.185512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode630 00:13:09.909 [2024-12-10 22:44:17.496498] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:09.909 { 00:13:09.909 "nqn": "nqn.2016-06.io.spdk:cnode630", 00:13:09.909 "tgt_name": "foobar", 00:13:09.909 "method": "nvmf_create_subsystem", 00:13:09.909 "req_id": 1 00:13:09.909 } 00:13:09.909 Got JSON-RPC error 
response 00:13:09.909 response: 00:13:09.909 { 00:13:09.909 "code": -32603, 00:13:09.909 "message": "Unable to find target foobar" 00:13:09.909 }' 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:09.909 { 00:13:09.909 "nqn": "nqn.2016-06.io.spdk:cnode630", 00:13:09.909 "tgt_name": "foobar", 00:13:09.909 "method": "nvmf_create_subsystem", 00:13:09.909 "req_id": 1 00:13:09.909 } 00:13:09.909 Got JSON-RPC error response 00:13:09.909 response: 00:13:09.909 { 00:13:09.909 "code": -32603, 00:13:09.909 "message": "Unable to find target foobar" 00:13:09.909 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:09.909 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3354 00:13:10.167 [2024-12-10 22:44:17.701184] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3354: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:10.167 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:10.167 { 00:13:10.167 "nqn": "nqn.2016-06.io.spdk:cnode3354", 00:13:10.167 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:10.167 "method": "nvmf_create_subsystem", 00:13:10.167 "req_id": 1 00:13:10.167 } 00:13:10.167 Got JSON-RPC error response 00:13:10.167 response: 00:13:10.167 { 00:13:10.167 "code": -32602, 00:13:10.167 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:10.167 }' 00:13:10.167 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:10.167 { 00:13:10.167 "nqn": "nqn.2016-06.io.spdk:cnode3354", 00:13:10.167 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:10.167 "method": "nvmf_create_subsystem", 00:13:10.167 
"req_id": 1 00:13:10.167 } 00:13:10.167 Got JSON-RPC error response 00:13:10.167 response: 00:13:10.167 { 00:13:10.167 "code": -32602, 00:13:10.167 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:10.167 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.167 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:10.167 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode5209 00:13:10.425 [2024-12-10 22:44:17.901836] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5209: invalid model number 'SPDK_Controller' 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:10.425 { 00:13:10.425 "nqn": "nqn.2016-06.io.spdk:cnode5209", 00:13:10.425 "model_number": "SPDK_Controller\u001f", 00:13:10.425 "method": "nvmf_create_subsystem", 00:13:10.425 "req_id": 1 00:13:10.425 } 00:13:10.425 Got JSON-RPC error response 00:13:10.425 response: 00:13:10.425 { 00:13:10.425 "code": -32602, 00:13:10.425 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.425 }' 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:10.425 { 00:13:10.425 "nqn": "nqn.2016-06.io.spdk:cnode5209", 00:13:10.425 "model_number": "SPDK_Controller\u001f", 00:13:10.425 "method": "nvmf_create_subsystem", 00:13:10.425 "req_id": 1 00:13:10.425 } 00:13:10.425 Got JSON-RPC error response 00:13:10.425 response: 00:13:10.425 { 00:13:10.425 "code": -32602, 00:13:10.425 "message": "Invalid MN SPDK_Controller\u001f" 00:13:10.425 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.425 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:10.426 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.426 22:44:17 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:10.426 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.426 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ y == \- ]] 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'y['\''=VkhK-1'\''svVLf`f3ru' 00:13:10.426 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -s 'y['\''=VkhK-1'\''svVLf`f3ru' nqn.2016-06.io.spdk:cnode30761 00:13:10.685 [2024-12-10 22:44:18.247015] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30761: invalid serial number 'y['=VkhK-1'svVLf`f3ru' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:10.685 { 00:13:10.685 "nqn": "nqn.2016-06.io.spdk:cnode30761", 00:13:10.685 "serial_number": "y['\''=VkhK-1'\''svVLf`f3ru", 00:13:10.685 "method": "nvmf_create_subsystem", 00:13:10.685 "req_id": 1 00:13:10.685 } 00:13:10.685 Got JSON-RPC error response 00:13:10.685 response: 00:13:10.685 { 00:13:10.685 "code": -32602, 00:13:10.685 "message": "Invalid SN y['\''=VkhK-1'\''svVLf`f3ru" 00:13:10.685 }' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:10.685 { 00:13:10.685 "nqn": "nqn.2016-06.io.spdk:cnode30761", 00:13:10.685 "serial_number": "y['=VkhK-1'svVLf`f3ru", 00:13:10.685 "method": "nvmf_create_subsystem", 00:13:10.685 "req_id": 1 00:13:10.685 } 00:13:10.685 Got JSON-RPC error response 00:13:10.685 response: 00:13:10.685 { 00:13:10.685 "code": -32602, 00:13:10.685 "message": "Invalid SN y['=VkhK-1'svVLf`f3ru" 00:13:10.685 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 
00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.685 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:10.685 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.685 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:13:10.686 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:10.686 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:10.686 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:10.945 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:10.945 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:10.945 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:10.945 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'r'\''@}iH'\''~ub0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H' 00:13:10.946 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem -d 'r'\''@}iH'\''~ub0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H' nqn.2016-06.io.spdk:cnode26681 00:13:11.204 [2024-12-10 22:44:18.720543] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26681: invalid model number 'r'@}iH'~ub0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H' 00:13:11.204 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:13:11.204 { 00:13:11.204 "nqn": "nqn.2016-06.io.spdk:cnode26681", 00:13:11.204 "model_number": "r'\''@}iH'\''~ub\u007f0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H", 00:13:11.204 "method": "nvmf_create_subsystem", 00:13:11.204 "req_id": 1 00:13:11.204 } 00:13:11.204 Got JSON-RPC error response 00:13:11.204 response: 00:13:11.204 { 00:13:11.204 "code": -32602, 00:13:11.204 "message": "Invalid MN r'\''@}iH'\''~ub\u007f0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H" 
00:13:11.204 }' 00:13:11.204 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:13:11.204 { 00:13:11.204 "nqn": "nqn.2016-06.io.spdk:cnode26681", 00:13:11.204 "model_number": "r'@}iH'~ub\u007f0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H", 00:13:11.204 "method": "nvmf_create_subsystem", 00:13:11.204 "req_id": 1 00:13:11.204 } 00:13:11.204 Got JSON-RPC error response 00:13:11.204 response: 00:13:11.204 { 00:13:11.204 "code": -32602, 00:13:11.204 "message": "Invalid MN r'@}iH'~ub\u007f0kgqDxO_q)Q12^Ivy1oO{LvTwdk2.H" 00:13:11.204 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:11.204 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:11.204 [2024-12-10 22:44:18.917244] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.462 22:44:18 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:11.462 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:11.462 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:11.462 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:11.462 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:11.462 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:11.721 [2024-12-10 22:44:19.322609] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:11.721 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:11.721 { 
00:13:11.721 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.721 "listen_address": { 00:13:11.721 "trtype": "tcp", 00:13:11.721 "traddr": "", 00:13:11.721 "trsvcid": "4421" 00:13:11.721 }, 00:13:11.721 "method": "nvmf_subsystem_remove_listener", 00:13:11.721 "req_id": 1 00:13:11.721 } 00:13:11.721 Got JSON-RPC error response 00:13:11.721 response: 00:13:11.721 { 00:13:11.721 "code": -32602, 00:13:11.721 "message": "Invalid parameters" 00:13:11.721 }' 00:13:11.721 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:11.721 { 00:13:11.721 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:11.721 "listen_address": { 00:13:11.721 "trtype": "tcp", 00:13:11.721 "traddr": "", 00:13:11.721 "trsvcid": "4421" 00:13:11.721 }, 00:13:11.721 "method": "nvmf_subsystem_remove_listener", 00:13:11.721 "req_id": 1 00:13:11.721 } 00:13:11.721 Got JSON-RPC error response 00:13:11.721 response: 00:13:11.721 { 00:13:11.721 "code": -32602, 00:13:11.721 "message": "Invalid parameters" 00:13:11.721 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:11.721 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14881 -i 0 00:13:11.980 [2024-12-10 22:44:19.523225] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14881: invalid cntlid range [0-65519] 00:13:11.980 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:11.980 { 00:13:11.980 "nqn": "nqn.2016-06.io.spdk:cnode14881", 00:13:11.980 "min_cntlid": 0, 00:13:11.980 "method": "nvmf_create_subsystem", 00:13:11.980 "req_id": 1 00:13:11.980 } 00:13:11.980 Got JSON-RPC error response 00:13:11.980 response: 00:13:11.980 { 00:13:11.980 "code": -32602, 00:13:11.980 "message": "Invalid cntlid range [0-65519]" 00:13:11.980 }' 00:13:11.980 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@74 -- # [[ request: 00:13:11.980 { 00:13:11.980 "nqn": "nqn.2016-06.io.spdk:cnode14881", 00:13:11.980 "min_cntlid": 0, 00:13:11.980 "method": "nvmf_create_subsystem", 00:13:11.980 "req_id": 1 00:13:11.980 } 00:13:11.980 Got JSON-RPC error response 00:13:11.980 response: 00:13:11.980 { 00:13:11.980 "code": -32602, 00:13:11.980 "message": "Invalid cntlid range [0-65519]" 00:13:11.980 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:11.980 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20215 -i 65520 00:13:12.239 [2024-12-10 22:44:19.727900] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20215: invalid cntlid range [65520-65519] 00:13:12.239 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:12.239 { 00:13:12.239 "nqn": "nqn.2016-06.io.spdk:cnode20215", 00:13:12.239 "min_cntlid": 65520, 00:13:12.239 "method": "nvmf_create_subsystem", 00:13:12.239 "req_id": 1 00:13:12.239 } 00:13:12.239 Got JSON-RPC error response 00:13:12.239 response: 00:13:12.239 { 00:13:12.239 "code": -32602, 00:13:12.239 "message": "Invalid cntlid range [65520-65519]" 00:13:12.239 }' 00:13:12.239 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:12.239 { 00:13:12.239 "nqn": "nqn.2016-06.io.spdk:cnode20215", 00:13:12.239 "min_cntlid": 65520, 00:13:12.239 "method": "nvmf_create_subsystem", 00:13:12.239 "req_id": 1 00:13:12.239 } 00:13:12.239 Got JSON-RPC error response 00:13:12.239 response: 00:13:12.239 { 00:13:12.239 "code": -32602, 00:13:12.239 "message": "Invalid cntlid range [65520-65519]" 00:13:12.239 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.239 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3708 -I 0 00:13:12.239 [2024-12-10 22:44:19.944628] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3708: invalid cntlid range [1-0] 00:13:12.498 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:12.498 { 00:13:12.498 "nqn": "nqn.2016-06.io.spdk:cnode3708", 00:13:12.498 "max_cntlid": 0, 00:13:12.498 "method": "nvmf_create_subsystem", 00:13:12.498 "req_id": 1 00:13:12.498 } 00:13:12.498 Got JSON-RPC error response 00:13:12.498 response: 00:13:12.498 { 00:13:12.498 "code": -32602, 00:13:12.498 "message": "Invalid cntlid range [1-0]" 00:13:12.498 }' 00:13:12.498 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:12.498 { 00:13:12.498 "nqn": "nqn.2016-06.io.spdk:cnode3708", 00:13:12.498 "max_cntlid": 0, 00:13:12.498 "method": "nvmf_create_subsystem", 00:13:12.498 "req_id": 1 00:13:12.498 } 00:13:12.498 Got JSON-RPC error response 00:13:12.498 response: 00:13:12.498 { 00:13:12.498 "code": -32602, 00:13:12.498 "message": "Invalid cntlid range [1-0]" 00:13:12.498 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.498 22:44:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21732 -I 65520 00:13:12.498 [2024-12-10 22:44:20.157337] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21732: invalid cntlid range [1-65520] 00:13:12.498 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:12.498 { 00:13:12.498 "nqn": "nqn.2016-06.io.spdk:cnode21732", 00:13:12.498 "max_cntlid": 65520, 00:13:12.498 "method": "nvmf_create_subsystem", 00:13:12.498 "req_id": 1 00:13:12.498 } 00:13:12.498 Got JSON-RPC error response 
00:13:12.498 response: 00:13:12.498 { 00:13:12.498 "code": -32602, 00:13:12.498 "message": "Invalid cntlid range [1-65520]" 00:13:12.498 }' 00:13:12.498 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:12.498 { 00:13:12.498 "nqn": "nqn.2016-06.io.spdk:cnode21732", 00:13:12.498 "max_cntlid": 65520, 00:13:12.498 "method": "nvmf_create_subsystem", 00:13:12.498 "req_id": 1 00:13:12.498 } 00:13:12.498 Got JSON-RPC error response 00:13:12.498 response: 00:13:12.498 { 00:13:12.498 "code": -32602, 00:13:12.498 "message": "Invalid cntlid range [1-65520]" 00:13:12.498 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.498 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9187 -i 6 -I 5 00:13:12.756 [2024-12-10 22:44:20.358013] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9187: invalid cntlid range [6-5] 00:13:12.756 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:12.756 { 00:13:12.756 "nqn": "nqn.2016-06.io.spdk:cnode9187", 00:13:12.756 "min_cntlid": 6, 00:13:12.756 "max_cntlid": 5, 00:13:12.756 "method": "nvmf_create_subsystem", 00:13:12.756 "req_id": 1 00:13:12.756 } 00:13:12.756 Got JSON-RPC error response 00:13:12.756 response: 00:13:12.756 { 00:13:12.756 "code": -32602, 00:13:12.756 "message": "Invalid cntlid range [6-5]" 00:13:12.756 }' 00:13:12.756 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:12.756 { 00:13:12.756 "nqn": "nqn.2016-06.io.spdk:cnode9187", 00:13:12.756 "min_cntlid": 6, 00:13:12.756 "max_cntlid": 5, 00:13:12.756 "method": "nvmf_create_subsystem", 00:13:12.756 "req_id": 1 00:13:12.756 } 00:13:12.756 Got JSON-RPC error response 00:13:12.756 response: 00:13:12.756 { 00:13:12.756 "code": -32602, 00:13:12.756 "message": 
"Invalid cntlid range [6-5]" 00:13:12.756 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:12.756 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:13.015 { 00:13:13.015 "name": "foobar", 00:13:13.015 "method": "nvmf_delete_target", 00:13:13.015 "req_id": 1 00:13:13.015 } 00:13:13.015 Got JSON-RPC error response 00:13:13.015 response: 00:13:13.015 { 00:13:13.015 "code": -32602, 00:13:13.015 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:13.015 }' 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:13.015 { 00:13:13.015 "name": "foobar", 00:13:13.015 "method": "nvmf_delete_target", 00:13:13.015 "req_id": 1 00:13:13.015 } 00:13:13.015 Got JSON-RPC error response 00:13:13.015 response: 00:13:13.015 { 00:13:13.015 "code": -32602, 00:13:13.015 "message": "The specified target doesn't exist, cannot delete it." 
00:13:13.015 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:13.015 rmmod nvme_tcp 00:13:13.015 rmmod nvme_fabrics 00:13:13.015 rmmod nvme_keyring 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 2314158 ']' 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 2314158 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 2314158 ']' 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 2314158 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2314158 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2314158' 00:13:13.015 killing process with pid 2314158 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 2314158 00:13:13.015 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 2314158 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.275 22:44:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:13.275 22:44:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:15.184 00:13:15.184 real 0m12.060s 00:13:15.184 user 0m18.639s 00:13:15.184 sys 0m5.455s 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:15.184 ************************************ 00:13:15.184 END TEST nvmf_invalid 00:13:15.184 ************************************ 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:15.184 22:44:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:15.444 ************************************ 00:13:15.444 START TEST nvmf_connect_stress 00:13:15.444 ************************************ 00:13:15.444 22:44:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:15.444 * Looking for test storage... 
00:13:15.444 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:13:15.444 22:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.444 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.445 22:44:23 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.445 --rc genhtml_branch_coverage=1 00:13:15.445 --rc genhtml_function_coverage=1 00:13:15.445 --rc genhtml_legend=1 00:13:15.445 --rc geninfo_all_blocks=1 00:13:15.445 --rc geninfo_unexecuted_blocks=1 00:13:15.445 00:13:15.445 ' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.445 --rc genhtml_branch_coverage=1 00:13:15.445 --rc genhtml_function_coverage=1 00:13:15.445 --rc genhtml_legend=1 00:13:15.445 --rc geninfo_all_blocks=1 00:13:15.445 --rc geninfo_unexecuted_blocks=1 00:13:15.445 00:13:15.445 ' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.445 --rc genhtml_branch_coverage=1 00:13:15.445 --rc genhtml_function_coverage=1 00:13:15.445 --rc genhtml_legend=1 00:13:15.445 --rc geninfo_all_blocks=1 00:13:15.445 --rc geninfo_unexecuted_blocks=1 00:13:15.445 00:13:15.445 ' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:15.445 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.445 --rc genhtml_branch_coverage=1 00:13:15.445 --rc genhtml_function_coverage=1 00:13:15.445 --rc genhtml_legend=1 00:13:15.445 --rc geninfo_all_blocks=1 00:13:15.445 --rc geninfo_unexecuted_blocks=1 00:13:15.445 00:13:15.445 ' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:15.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:15.445 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.446 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.446 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.446 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:15.446 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:15.446 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:13:15.446 22:44:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.023 22:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:22.023 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.023 22:44:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:22.023 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.023 22:44:28 
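The discovery pass above populates per-family ID arrays (e810, x722, mlx) from the PCI bus cache and then matches each found device against them. A minimal stand-alone sketch of that classification, using the vendor:device IDs visible in the trace (the `classify` helper is illustrative, not a function from nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Classify a PCI device ID into the NIC family groups seen above.
classify() {
  case "$1" in
    0x1592|0x159b) echo e810 ;;    # Intel E810 family (ice driver)
    0x37d2)        echo x722 ;;    # Intel X722
    0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                   echo mlx ;;     # Mellanox ConnectX family
    *)             echo unknown ;;
  esac
}

classify 0x159b   # prints "e810" (matches "Found 0000:86:00.0 (0x8086 - 0x159b)")
classify 0x1017   # prints "mlx"
```

Both 0000:86:00.0 and 0000:86:00.1 report device ID 0x159b, which is why the trace takes the e810 branch and later selects the `cvl_0_*` net devices.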
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:22.023 Found net devices under 0000:86:00.0: cvl_0_0 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.023 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:22.024 Found net devices under 0000:86:00.1: cvl_0_1 
00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:22.024 22:44:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:22.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:22.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.413 ms 00:13:22.024 00:13:22.024 --- 10.0.0.2 ping statistics --- 00:13:22.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.024 rtt min/avg/max/mdev = 0.413/0.413/0.413/0.000 ms 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:22.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:22.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:13:22.024 00:13:22.024 --- 10.0.0.1 ping statistics --- 00:13:22.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.024 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:22.024 22:44:29 
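The `nvmf_tcp_init` sequence above moves the target interface into its own network namespace so initiator and target traffic cross a real link, then verifies reachability with pings in both directions. A dry-run sketch of that flow (commands are echoed rather than executed, since the real steps need root; swap `run` for a `sudo` wrapper to apply them):

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the namespace test-bed built above.
run() { echo "+ $*"; }   # replace with: run() { sudo "$@"; } to execute

NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1

run ip -4 addr flush "$TGT"
run ip -4 addr flush "$INI"
run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                       # target iface into the ns
run ip addr add 10.0.0.1/24 dev "$INI"                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target side
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                   # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1               # target -> initiator
```

The sub-millisecond RTTs in the ping statistics (0.413 ms and 0.206 ms) confirm both directions work before the target is started.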
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2318437 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2318437 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2318437 ']' 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 [2024-12-10 22:44:29.260924] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:13:22.024 [2024-12-10 22:44:29.260971] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.024 [2024-12-10 22:44:29.340361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:22.024 [2024-12-10 22:44:29.381931] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.024 [2024-12-10 22:44:29.381962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.024 [2024-12-10 22:44:29.381970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.024 [2024-12-10 22:44:29.381976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.024 [2024-12-10 22:44:29.381981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
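After launching `nvmf_tgt` inside the namespace, `waitforlisten` (with `max_retries=100`) polls until the app is listening on /var/tmp/spdk.sock. A minimal reproduction of that retry pattern, with file creation standing in for the UNIX socket appearing (the background subshell is a stand-in for target startup, not the real app):

```shell
#!/usr/bin/env bash
# waitforlisten-style polling loop: retry until the RPC endpoint exists.
sock=$(mktemp -u)
( sleep 0.5; : > "$sock" ) &   # simulated nvmf_tgt creating its socket

max_retries=100
i=0
until [ -e "$sock" ]; do
  i=$((i + 1))
  [ "$i" -ge "$max_retries" ] && { echo "timeout waiting for $sock"; exit 1; }
  sleep 0.1
done
echo "listening after $i retries"
rm -f "$sock"
```

In the trace this is the gap between "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." and the `return 0` at autotest_common.sh@868.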
00:13:22.024 [2024-12-10 22:44:29.383320] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.024 [2024-12-10 22:44:29.383428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.024 [2024-12-10 22:44:29.383428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 [2024-12-10 22:44:29.520861] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.024 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.025 [2024-12-10 22:44:29.541072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.025 NULL1 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2318500 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.025 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.283 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.284 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:22.284 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.284 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
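The bring-up above issues four RPCs (transport, subsystem, listener, null bdev) and then the seq/cat loop batches twenty RPC lines into rpc.txt for replay while connect_stress runs. A sketch with the rpc.py calls echoed rather than sent (rpc.py needs a running target; the `cat` bodies are elided in the trace, so the per-iteration `nvmf_subsystem_add_ns` line below is an assumption about what the loop writes):

```shell
#!/usr/bin/env bash
# Echoed reconstruction of the target bring-up RPCs from the trace.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512

# Batch 20 RPC lines into rpc.txt, mirroring the seq 1 20 / cat loop.
# (Assumed payload; the actual template is not shown in the log.)
rpcs=rpc.txt
for i in $(seq 1 20); do
  echo "nvmf_subsystem_add_ns $NQN NULL1"
done > "$rpcs"
wc -l < "$rpcs"   # prints 20
```

Batching into a file lets the driver fire all twenty RPCs per poll cycle with a single `rpc_cmd < rpc.txt` while the stress tool holds connections open.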
common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.284 22:44:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:22.850 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.850 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:22.850 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:22.850 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.850 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.109 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.109 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:23.109 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.109 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.109 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.367 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.367 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:23.367 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.367 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.367 22:44:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.628 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.628 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:23.628 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.628 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.628 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:23.900 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.900 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:23.900 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:23.900 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.900 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.502 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.502 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:24.502 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.502 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.502 22:44:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:24.761 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.761 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:24.761 22:44:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:24.761 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.761 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.020 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.020 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:25.020 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.020 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.020 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.279 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.279 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:25.279 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.279 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.279 22:44:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:25.537 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.537 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:25.537 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:25.537 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.537 
22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.104 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.104 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:26.104 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.104 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.104 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.363 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.363 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:26.363 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.363 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.363 22:44:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.621 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.621 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:26.621 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.621 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.621 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:26.880 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.880 
22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:26.880 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:26.880 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.880 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.139 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.139 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:27.139 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.139 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.139 22:44:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.705 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.705 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:27.705 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:27.705 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.705 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:27.964 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.964 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:27.964 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:13:27.964 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.964 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.222 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.222 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:28.222 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.222 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.222 22:44:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:28.481 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.481 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:28.481 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:28.481 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.481 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.049 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.049 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:29.049 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.049 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.049 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:13:29.308 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.308 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:29.308 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.308 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.308 22:44:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.566 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.566 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:29.566 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.566 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.566 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:29.825 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.825 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:29.825 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:29.825 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.825 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.084 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.084 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2318500 00:13:30.084 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.084 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.084 22:44:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.650 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.650 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:30.650 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.650 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.650 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.909 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.909 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:30.909 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:30.909 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.909 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.168 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.168 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:31.168 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.168 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:31.168 22:44:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.426 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.426 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:31.426 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.426 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.426 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:31.685 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.685 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:31.685 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:31.685 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.685 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.252 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2318500 00:13:32.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2318500) - No such process 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2318500 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpc.txt 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:32.252 rmmod nvme_tcp 00:13:32.252 rmmod nvme_fabrics 00:13:32.252 rmmod nvme_keyring 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2318437 ']' 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2318437 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2318437 ']' 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2318437 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@959 -- # uname 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2318437 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2318437' 00:13:32.252 killing process with pid 2318437 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2318437 00:13:32.252 22:44:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2318437 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:32.512 22:44:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:32.512 22:44:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:34.420 00:13:34.420 real 0m19.157s 00:13:34.420 user 0m39.389s 00:13:34.420 sys 0m8.575s 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.420 ************************************ 00:13:34.420 END TEST nvmf_connect_stress 00:13:34.420 ************************************ 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:34.420 22:44:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:34.679 ************************************ 00:13:34.679 START TEST nvmf_fused_ordering 00:13:34.679 ************************************ 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:34.679 * Looking for test storage... 
00:13:34.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:34.679 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:34.680 22:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:34.680 22:44:42 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:34.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.680 --rc genhtml_branch_coverage=1 00:13:34.680 --rc genhtml_function_coverage=1 00:13:34.680 --rc genhtml_legend=1 00:13:34.680 --rc geninfo_all_blocks=1 00:13:34.680 --rc geninfo_unexecuted_blocks=1 00:13:34.680 00:13:34.680 ' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:34.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.680 --rc genhtml_branch_coverage=1 00:13:34.680 --rc genhtml_function_coverage=1 00:13:34.680 --rc genhtml_legend=1 00:13:34.680 --rc geninfo_all_blocks=1 00:13:34.680 --rc geninfo_unexecuted_blocks=1 00:13:34.680 00:13:34.680 ' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:34.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.680 --rc genhtml_branch_coverage=1 00:13:34.680 --rc genhtml_function_coverage=1 00:13:34.680 --rc genhtml_legend=1 00:13:34.680 --rc geninfo_all_blocks=1 00:13:34.680 --rc geninfo_unexecuted_blocks=1 00:13:34.680 00:13:34.680 ' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:34.680 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:34.680 --rc genhtml_branch_coverage=1 00:13:34.680 --rc genhtml_function_coverage=1 00:13:34.680 --rc genhtml_legend=1 00:13:34.680 --rc geninfo_all_blocks=1 00:13:34.680 --rc geninfo_unexecuted_blocks=1 00:13:34.680 00:13:34.680 ' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:34.680 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:34.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:13:34.681 22:44:42 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:41.254 22:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:41.254 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.254 22:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:41.254 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.254 22:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:41.254 Found net devices under 0000:86:00.0: cvl_0_0 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:41.254 Found net devices under 0000:86:00.1: cvl_0_1 
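The trace above first registers per-family PCI device IDs (`e810`, `x722`, `mlx`) and then matches each discovered port against them; both ports in this run (`0x8086 - 0x159b`) land in the e810 bucket. A minimal sketch of that lookup — the Mellanox vendor ID `0x15b3` is an assumption here, since the trace only expands Intel's `0x8086`:

```shell
# Sketch of the NIC-family classification nvmf/common.sh performs above.
# Device IDs are the ones registered in this trace; 0x15b3 as the Mellanox
# vendor ID is an assumption (the log only prints Intel's 0x8086).
classify() {
    case "$1:$2" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2) echo x722 ;;
        0x15b3:0xa2dc|0x15b3:0x1021|0x15b3:0xa2d6|0x15b3:0x101d|\
        0x15b3:0x101b|0x15b3:0x1017|0x15b3:0x1019|0x15b3:0x1015|\
        0x15b3:0x1013) echo mlx ;;
        *) echo unknown ;;
    esac
}

classify 0x8086 0x159b   # prints e810, as for 0000:86:00.0/1 above
```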
00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:41.254 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:41.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:41.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:13:41.255 00:13:41.255 --- 10.0.0.2 ping statistics --- 00:13:41.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.255 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:41.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:41.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:13:41.255 00:13:41.255 --- 10.0.0.1 ping statistics --- 00:13:41.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:41.255 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:41.255 22:44:48 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2323824 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2323824 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2323824 ']' 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:41.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 [2024-12-10 22:44:48.415517] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
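The `nvmf_tcp_init` plumbing traced above — move the target port into a network namespace, assign 10.0.0.1/10.0.0.2, open TCP port 4420, verify with ping — can be condensed into a dry-run script. Interface names `cvl_0_0`/`cvl_0_1` are the ones from this run; swap the `echo` in `run()` for `sudo "$@"` to actually apply the commands (requires root and the real NICs):

```shell
# Dry-run sketch of the TCP test-network setup from nvmf/common.sh above.
# run() only echoes; replace its body with 'sudo "$@"' to execute for real.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk
run ip -4 addr flush cvl_0_0
run ip -4 addr flush cvl_0_1
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"              # target port lives in the netns
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, host netns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                           # host -> target, as in the log
run ip netns exec "$NS" ping -c 1 10.0.0.1       # target -> host
```

Keeping the target in its own namespace is what lets the same host act as both NVMe/TCP target and initiator over a real cable between the two ports.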
00:13:41.255 [2024-12-10 22:44:48.415568] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:41.255 [2024-12-10 22:44:48.495194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:41.255 [2024-12-10 22:44:48.535660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:41.255 [2024-12-10 22:44:48.535695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:41.255 [2024-12-10 22:44:48.535702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:41.255 [2024-12-10 22:44:48.535708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:41.255 [2024-12-10 22:44:48.535713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
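Once `nvmf_tgt` is up inside the namespace, `fused_ordering.sh` configures it over RPC (`rpc_cmd` in the test scripts is effectively a wrapper around SPDK's `scripts/rpc.py`). A dry-run sketch of the sequence, with flags copied from this run's trace:

```shell
# Dry-run sketch of the RPC calls fused_ordering.sh issues against nvmf_tgt.
# rpc() only echoes; point it at a real scripts/rpc.py to execute.
rpc() { echo "+ rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512   # ~1 GB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

The null bdev gives the fused_ordering binary a namespace to issue fused command pairs against without touching real media, matching the "Namespace ID: 1 size: 1GB" line in the test output.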
00:13:41.255 [2024-12-10 22:44:48.536265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 [2024-12-10 22:44:48.672607] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 [2024-12-10 22:44:48.692776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 NULL1 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.255 22:44:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:41.255 [2024-12-10 22:44:48.752621] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:13:41.255 [2024-12-10 22:44:48.752672] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2323855 ] 00:13:41.514 Attached to nqn.2016-06.io.spdk:cnode1 00:13:41.514 Namespace ID: 1 size: 1GB 00:13:41.514 fused_ordering(0) 00:13:41.514 fused_ordering(1) 00:13:41.514 fused_ordering(2) 00:13:41.514 fused_ordering(3) 00:13:41.514 fused_ordering(4) 00:13:41.514 fused_ordering(5) 00:13:41.514 fused_ordering(6) 00:13:41.514 fused_ordering(7) 00:13:41.514 fused_ordering(8) 00:13:41.514 fused_ordering(9) 00:13:41.514 fused_ordering(10) 00:13:41.514 fused_ordering(11) 00:13:41.514 fused_ordering(12) 00:13:41.514 fused_ordering(13) 00:13:41.514 fused_ordering(14) 00:13:41.514 fused_ordering(15) 00:13:41.514 fused_ordering(16) 00:13:41.514 fused_ordering(17) 00:13:41.514 fused_ordering(18) 00:13:41.514 fused_ordering(19) 00:13:41.514 fused_ordering(20) 00:13:41.514 fused_ordering(21) 00:13:41.514 fused_ordering(22) 00:13:41.514 fused_ordering(23) 00:13:41.514 fused_ordering(24) 00:13:41.514 fused_ordering(25) 00:13:41.514 fused_ordering(26) 00:13:41.514 fused_ordering(27) 00:13:41.514 
fused_ordering(28) 00:13:41.514 fused_ordering(29) 00:13:41.514 fused_ordering(30) 00:13:41.514 fused_ordering(31) 00:13:41.514 fused_ordering(32) 00:13:41.514 fused_ordering(33) 00:13:41.514 fused_ordering(34) 00:13:41.514 fused_ordering(35) 00:13:41.514 fused_ordering(36) 00:13:41.514 fused_ordering(37) 00:13:41.514 fused_ordering(38) 00:13:41.514 fused_ordering(39) 00:13:41.514 fused_ordering(40) 00:13:41.514 fused_ordering(41) 00:13:41.514 fused_ordering(42) 00:13:41.514 fused_ordering(43) 00:13:41.514 fused_ordering(44) 00:13:41.514 fused_ordering(45) 00:13:41.514 fused_ordering(46) 00:13:41.514 fused_ordering(47) 00:13:41.514 fused_ordering(48) 00:13:41.514 fused_ordering(49) 00:13:41.514 fused_ordering(50) 00:13:41.514 fused_ordering(51) 00:13:41.514 fused_ordering(52) 00:13:41.514 fused_ordering(53) 00:13:41.514 fused_ordering(54) 00:13:41.514 fused_ordering(55) 00:13:41.514 fused_ordering(56) 00:13:41.514 fused_ordering(57) 00:13:41.514 fused_ordering(58) 00:13:41.514 fused_ordering(59) 00:13:41.514 fused_ordering(60) 00:13:41.514 fused_ordering(61) 00:13:41.514 fused_ordering(62) 00:13:41.514 fused_ordering(63) 00:13:41.514 fused_ordering(64) 00:13:41.514 fused_ordering(65) 00:13:41.514 fused_ordering(66) 00:13:41.514 fused_ordering(67) 00:13:41.514 fused_ordering(68) 00:13:41.514 fused_ordering(69) 00:13:41.514 fused_ordering(70) 00:13:41.514 fused_ordering(71) 00:13:41.514 fused_ordering(72) 00:13:41.514 fused_ordering(73) 00:13:41.514 fused_ordering(74) 00:13:41.514 fused_ordering(75) 00:13:41.514 fused_ordering(76) 00:13:41.514 fused_ordering(77) 00:13:41.515 fused_ordering(78) 00:13:41.515 fused_ordering(79) 00:13:41.515 fused_ordering(80) 00:13:41.515 fused_ordering(81) 00:13:41.515 fused_ordering(82) 00:13:41.515 fused_ordering(83) 00:13:41.515 fused_ordering(84) 00:13:41.515 fused_ordering(85) 00:13:41.515 fused_ordering(86) 00:13:41.515 fused_ordering(87) 00:13:41.515 fused_ordering(88) 00:13:41.515 fused_ordering(89) 00:13:41.515 
fused_ordering(90) 00:13:41.515 fused_ordering(91) 00:13:41.515 fused_ordering(92) 00:13:41.515 fused_ordering(93) 00:13:41.515 fused_ordering(94) 00:13:41.515 fused_ordering(95) 00:13:41.515 fused_ordering(96) 00:13:41.515 fused_ordering(97) 00:13:41.515 fused_ordering(98) 00:13:41.515 fused_ordering(99) 00:13:41.515 fused_ordering(100) 00:13:41.515 fused_ordering(101) 00:13:41.515 fused_ordering(102) 00:13:41.515 fused_ordering(103) 00:13:41.515 fused_ordering(104) 00:13:41.515 fused_ordering(105) 00:13:41.515 fused_ordering(106) 00:13:41.515 fused_ordering(107) 00:13:41.515 fused_ordering(108) 00:13:41.515 fused_ordering(109) 00:13:41.515 fused_ordering(110) 00:13:41.515 fused_ordering(111) 00:13:41.515 fused_ordering(112) 00:13:41.515 fused_ordering(113) 00:13:41.515 fused_ordering(114) 00:13:41.515 fused_ordering(115) 00:13:41.515 fused_ordering(116) 00:13:41.515 fused_ordering(117) 00:13:41.515 fused_ordering(118) 00:13:41.515 fused_ordering(119) 00:13:41.515 fused_ordering(120) 00:13:41.515 fused_ordering(121) 00:13:41.515 fused_ordering(122) 00:13:41.515 fused_ordering(123) 00:13:41.515 fused_ordering(124) 00:13:41.515 fused_ordering(125) 00:13:41.515 fused_ordering(126) 00:13:41.515 fused_ordering(127) 00:13:41.515 fused_ordering(128) 00:13:41.515 fused_ordering(129) 00:13:41.515 fused_ordering(130) 00:13:41.515 fused_ordering(131) 00:13:41.515 fused_ordering(132) 00:13:41.515 fused_ordering(133) 00:13:41.515 fused_ordering(134) 00:13:41.515 fused_ordering(135) 00:13:41.515 fused_ordering(136) 00:13:41.515 fused_ordering(137) 00:13:41.515 fused_ordering(138) 00:13:41.515 fused_ordering(139) 00:13:41.515 fused_ordering(140) 00:13:41.515 fused_ordering(141) 00:13:41.515 fused_ordering(142) 00:13:41.515 fused_ordering(143) 00:13:41.515 fused_ordering(144) 00:13:41.515 fused_ordering(145) 00:13:41.515 fused_ordering(146) 00:13:41.515 fused_ordering(147) 00:13:41.515 fused_ordering(148) 00:13:41.515 fused_ordering(149) 00:13:41.515 fused_ordering(150) 
00:13:41.515 fused_ordering(151) 00:13:41.515 fused_ordering(152) 00:13:41.515 fused_ordering(153) 00:13:41.515 fused_ordering(154) 00:13:41.515 fused_ordering(155) 00:13:41.515 fused_ordering(156) 00:13:41.515 fused_ordering(157) 00:13:41.515 fused_ordering(158) 00:13:41.515 fused_ordering(159) 00:13:41.515 fused_ordering(160) 00:13:41.515 fused_ordering(161) 00:13:41.515 fused_ordering(162) 00:13:41.515 fused_ordering(163) 00:13:41.515 fused_ordering(164) 00:13:41.515 fused_ordering(165) 00:13:41.515 fused_ordering(166) 00:13:41.515 fused_ordering(167) 00:13:41.515 fused_ordering(168) 00:13:41.515 fused_ordering(169) 00:13:41.515 fused_ordering(170) 00:13:41.515 fused_ordering(171) 00:13:41.515 fused_ordering(172) 00:13:41.515 fused_ordering(173) 00:13:41.515 fused_ordering(174) 00:13:41.515 fused_ordering(175) 00:13:41.515 fused_ordering(176) 00:13:41.515 fused_ordering(177) 00:13:41.515 fused_ordering(178) 00:13:41.515 fused_ordering(179) 00:13:41.515 fused_ordering(180) 00:13:41.515 fused_ordering(181) 00:13:41.515 fused_ordering(182) 00:13:41.515 fused_ordering(183) 00:13:41.515 fused_ordering(184) 00:13:41.515 fused_ordering(185) 00:13:41.515 fused_ordering(186) 00:13:41.515 fused_ordering(187) 00:13:41.515 fused_ordering(188) 00:13:41.515 fused_ordering(189) 00:13:41.515 fused_ordering(190) 00:13:41.515 fused_ordering(191) 00:13:41.515 fused_ordering(192) 00:13:41.515 fused_ordering(193) 00:13:41.515 fused_ordering(194) 00:13:41.515 fused_ordering(195) 00:13:41.515 fused_ordering(196) 00:13:41.515 fused_ordering(197) 00:13:41.515 fused_ordering(198) 00:13:41.515 fused_ordering(199) 00:13:41.515 fused_ordering(200) 00:13:41.515 fused_ordering(201) 00:13:41.515 fused_ordering(202) 00:13:41.515 fused_ordering(203) 00:13:41.515 fused_ordering(204) 00:13:41.515 fused_ordering(205) 00:13:41.772 fused_ordering(206) 00:13:41.772 fused_ordering(207) 00:13:41.772 fused_ordering(208) 00:13:41.772 fused_ordering(209) 00:13:41.772 fused_ordering(210) 00:13:41.772 
fused_ordering(211) 00:13:41.772 fused_ordering(212) 00:13:41.772 fused_ordering(213) 00:13:41.772 fused_ordering(214) 00:13:41.772 fused_ordering(215) 00:13:41.772 fused_ordering(216) 00:13:41.772 fused_ordering(217) 00:13:41.772 fused_ordering(218) 00:13:41.772 fused_ordering(219) 00:13:41.772 fused_ordering(220) 00:13:41.772 fused_ordering(221) 00:13:41.772 fused_ordering(222) 00:13:41.772 fused_ordering(223) 00:13:41.772 fused_ordering(224) 00:13:41.772 fused_ordering(225) 00:13:41.772 fused_ordering(226) 00:13:41.772 fused_ordering(227) 00:13:41.772 fused_ordering(228) 00:13:41.772 fused_ordering(229) 00:13:41.772 fused_ordering(230) 00:13:41.772 fused_ordering(231) 00:13:41.772 fused_ordering(232) 00:13:41.772 fused_ordering(233) 00:13:41.772 fused_ordering(234) 00:13:41.772 fused_ordering(235) 00:13:41.772 fused_ordering(236) 00:13:41.772 fused_ordering(237) 00:13:41.772 fused_ordering(238) 00:13:41.772 fused_ordering(239) 00:13:41.772 fused_ordering(240) 00:13:41.772 fused_ordering(241) 00:13:41.772 fused_ordering(242) 00:13:41.772 fused_ordering(243) 00:13:41.772 fused_ordering(244) 00:13:41.772 fused_ordering(245) 00:13:41.772 fused_ordering(246) 00:13:41.772 fused_ordering(247) 00:13:41.772 fused_ordering(248) 00:13:41.772 fused_ordering(249) 00:13:41.772 fused_ordering(250) 00:13:41.772 fused_ordering(251) 00:13:41.772 fused_ordering(252) 00:13:41.772 fused_ordering(253) 00:13:41.772 fused_ordering(254) 00:13:41.772 fused_ordering(255) 00:13:41.772 fused_ordering(256) 00:13:41.772 fused_ordering(257) 00:13:41.772 fused_ordering(258) 00:13:41.772 fused_ordering(259) 00:13:41.772 fused_ordering(260) 00:13:41.772 fused_ordering(261) 00:13:41.772 fused_ordering(262) 00:13:41.772 fused_ordering(263) 00:13:41.772 fused_ordering(264) 00:13:41.772 fused_ordering(265) 00:13:41.772 fused_ordering(266) 00:13:41.772 fused_ordering(267) 00:13:41.772 fused_ordering(268) 00:13:41.772 fused_ordering(269) 00:13:41.772 fused_ordering(270) 00:13:41.772 fused_ordering(271) 
00:13:41.772 fused_ordering(272) 00:13:41.772 fused_ordering(273) 00:13:41.772 fused_ordering(274) 00:13:41.772 fused_ordering(275) 00:13:41.772 fused_ordering(276) 00:13:41.772 fused_ordering(277) 00:13:41.772 fused_ordering(278) 00:13:41.772 fused_ordering(279) 00:13:41.772 fused_ordering(280) 00:13:41.772 fused_ordering(281) 00:13:41.772 fused_ordering(282) 00:13:41.772 fused_ordering(283) 00:13:41.772 fused_ordering(284) 00:13:41.772 fused_ordering(285) 00:13:41.772 fused_ordering(286) 00:13:41.772 fused_ordering(287) 00:13:41.772 fused_ordering(288) 00:13:41.772 fused_ordering(289) 00:13:41.772 fused_ordering(290) 00:13:41.772 fused_ordering(291) 00:13:41.772 fused_ordering(292) 00:13:41.772 fused_ordering(293) 00:13:41.772 fused_ordering(294) 00:13:41.772 fused_ordering(295) 00:13:41.773 fused_ordering(296) 00:13:41.773 fused_ordering(297) 00:13:41.773 fused_ordering(298) 00:13:41.773 fused_ordering(299) 00:13:41.773 fused_ordering(300) 00:13:41.773 fused_ordering(301) 00:13:41.773 fused_ordering(302) 00:13:41.773 fused_ordering(303) 00:13:41.773 fused_ordering(304) 00:13:41.773 fused_ordering(305) 00:13:41.773 fused_ordering(306) 00:13:41.773 fused_ordering(307) 00:13:41.773 fused_ordering(308) 00:13:41.773 fused_ordering(309) 00:13:41.773 fused_ordering(310) 00:13:41.773 fused_ordering(311) 00:13:41.773 fused_ordering(312) 00:13:41.773 fused_ordering(313) 00:13:41.773 fused_ordering(314) 00:13:41.773 fused_ordering(315) 00:13:41.773 fused_ordering(316) 00:13:41.773 fused_ordering(317) 00:13:41.773 fused_ordering(318) 00:13:41.773 fused_ordering(319) 00:13:41.773 fused_ordering(320) 00:13:41.773 fused_ordering(321) 00:13:41.773 fused_ordering(322) 00:13:41.773 fused_ordering(323) 00:13:41.773 fused_ordering(324) 00:13:41.773 fused_ordering(325) 00:13:41.773 fused_ordering(326) 00:13:41.773 fused_ordering(327) 00:13:41.773 fused_ordering(328) 00:13:41.773 fused_ordering(329) 00:13:41.773 fused_ordering(330) 00:13:41.773 fused_ordering(331) 00:13:41.773 
fused_ordering(332) 00:13:41.773 [... repetitive fused_ordering progress entries 333-996 elided ...] 00:13:43.169 fused_ordering(997)
00:13:43.169 fused_ordering(998) 00:13:43.169 fused_ordering(999) 00:13:43.169 fused_ordering(1000) 00:13:43.169 fused_ordering(1001) 00:13:43.169 fused_ordering(1002) 00:13:43.169 fused_ordering(1003) 00:13:43.169 fused_ordering(1004) 00:13:43.169 fused_ordering(1005) 00:13:43.169 fused_ordering(1006) 00:13:43.169 fused_ordering(1007) 00:13:43.169 fused_ordering(1008) 00:13:43.169 fused_ordering(1009) 00:13:43.169 fused_ordering(1010) 00:13:43.169 fused_ordering(1011) 00:13:43.169 fused_ordering(1012) 00:13:43.169 fused_ordering(1013) 00:13:43.169 fused_ordering(1014) 00:13:43.169 fused_ordering(1015) 00:13:43.169 fused_ordering(1016) 00:13:43.169 fused_ordering(1017) 00:13:43.169 fused_ordering(1018) 00:13:43.169 fused_ordering(1019) 00:13:43.169 fused_ordering(1020) 00:13:43.169 fused_ordering(1021) 00:13:43.169 fused_ordering(1022) 00:13:43.169 fused_ordering(1023) 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:43.169 rmmod nvme_tcp 00:13:43.169 rmmod nvme_fabrics 00:13:43.169 rmmod nvme_keyring 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:13:43.169 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2323824 ']' 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2323824 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2323824 ']' 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2323824 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2323824 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2323824' 00:13:43.170 killing process with pid 2323824 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2323824 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2323824 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:13:43.170 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:43.429 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:43.429 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.429 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.429 22:44:50 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.336 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:45.336 00:13:45.336 real 0m10.789s 00:13:45.336 user 0m5.082s 00:13:45.336 sys 0m5.922s 00:13:45.336 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:45.336 22:44:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:45.336 ************************************ 00:13:45.336 END TEST nvmf_fused_ordering 00:13:45.336 ************************************ 00:13:45.336 22:44:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:45.336 22:44:52 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:45.336 22:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:45.336 22:44:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.336 ************************************ 00:13:45.336 START TEST nvmf_ns_masking 00:13:45.336 ************************************ 00:13:45.336 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:45.596 * Looking for test storage... 00:13:45.596 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:45.596 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:45.597 22:44:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.597 --rc genhtml_branch_coverage=1 00:13:45.597 --rc genhtml_function_coverage=1 00:13:45.597 --rc genhtml_legend=1 00:13:45.597 --rc geninfo_all_blocks=1 00:13:45.597 --rc geninfo_unexecuted_blocks=1 00:13:45.597 00:13:45.597 ' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.597 --rc genhtml_branch_coverage=1 00:13:45.597 --rc genhtml_function_coverage=1 00:13:45.597 --rc genhtml_legend=1 00:13:45.597 --rc geninfo_all_blocks=1 00:13:45.597 --rc geninfo_unexecuted_blocks=1 00:13:45.597 00:13:45.597 ' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.597 --rc genhtml_branch_coverage=1 00:13:45.597 --rc genhtml_function_coverage=1 00:13:45.597 --rc genhtml_legend=1 00:13:45.597 --rc geninfo_all_blocks=1 00:13:45.597 --rc geninfo_unexecuted_blocks=1 00:13:45.597 00:13:45.597 ' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:45.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:45.597 --rc genhtml_branch_coverage=1 00:13:45.597 --rc 
genhtml_function_coverage=1 00:13:45.597 --rc genhtml_legend=1 00:13:45.597 --rc geninfo_all_blocks=1 00:13:45.597 --rc geninfo_unexecuted_blocks=1 00:13:45.597 00:13:45.597 ' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:45.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=5e242591-bd38-4ce6-902b-699f182cae3c 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d0e3d8ca-dfb4-404a-bcae-5b78f5b07702 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:45.597 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=453ba234-6372-498f-9cfc-dc609f0b7960 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:13:45.598 22:44:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:52.167 22:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:52.167 22:44:58 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:52.167 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:52.167 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: 
cvl_0_0' 00:13:52.167 Found net devices under 0000:86:00.0: cvl_0_0 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:52.167 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:52.167 Found net devices under 0000:86:00.1: cvl_0_1 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:52.168 22:44:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:52.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:52.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:13:52.168 00:13:52.168 --- 10.0.0.2 ping statistics --- 00:13:52.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.168 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:52.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:52.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:13:52.168 00:13:52.168 --- 10.0.0.1 ping statistics --- 00:13:52.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:52.168 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2327737 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2327737 
00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2327737 ']' 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:52.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.168 [2024-12-10 22:44:59.381712] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:13:52.168 [2024-12-10 22:44:59.381761] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:52.168 [2024-12-10 22:44:59.460505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.168 [2024-12-10 22:44:59.500561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:52.168 [2024-12-10 22:44:59.500597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:52.168 [2024-12-10 22:44:59.500605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:52.168 [2024-12-10 22:44:59.500611] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:52.168 [2024-12-10 22:44:59.500616] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:52.168 [2024-12-10 22:44:59.501160] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:52.168 [2024-12-10 22:44:59.810774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:52.168 22:44:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b 
Malloc1 00:13:52.427 Malloc1 00:13:52.427 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:52.686 Malloc2 00:13:52.686 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:52.944 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:53.203 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:53.203 [2024-12-10 22:45:00.872804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:53.203 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:53.203 22:45:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 453ba234-6372-498f-9cfc-dc609f0b7960 -a 10.0.0.2 -s 4420 -i 4 00:13:53.462 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:53.462 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:13:53.462 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:53.462 22:45:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:13:53.462 22:45:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:55.995 [ 0]:0x1 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:55.995 
22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83b63f96f42e4a499c7ccb50c994256c
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83b63f96f42e4a499c7ccb50c994256c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:55.995 [ 0]:0x1
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83b63f96f42e4a499c7ccb50c994256c
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83b63f96f42e4a499c7ccb50c994256c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:55.995 [ 1]:0x2
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:55.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:55.995 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:56.254 22:45:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 453ba234-6372-498f-9cfc-dc609f0b7960 -a 10.0.0.2 -s 4420 -i 4
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:13:56.513 22:45:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:58.413 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:58.413 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:58.413 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:58.672 [ 0]:0x2
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:58.672 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:58.931 [ 0]:0x1
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83b63f96f42e4a499c7ccb50c994256c
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83b63f96f42e4a499c7ccb50c994256c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:58.931 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:59.190 [ 1]:0x2
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:59.190 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:59.448 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:59.448 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:59.448 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:59.448 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:59.448 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:59.449 [ 0]:0x2
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:59.449 22:45:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:59.449 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:13:59.449 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:59.449 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:13:59.449 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:59.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:59.449 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:59.707 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:13:59.707 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 453ba234-6372-498f-9cfc-dc609f0b7960 -a 10.0.0.2 -s 4420 -i 4
00:13:59.966 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:13:59.966 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:59.966 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:59.966 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:13:59.966 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:13:59.966 22:45:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:01.869 [ 0]:0x1
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83b63f96f42e4a499c7ccb50c994256c
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83b63f96f42e4a499c7ccb50c994256c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:01.869 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:01.870 [ 1]:0x2
00:14:01.870 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:01.870 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:02.128 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:02.387 [ 0]:0x2
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:02.387 22:45:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]]
00:14:02.387 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1
[2024-12-10 22:45:10.207647] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2
00:14:02.646 request:
00:14:02.646 {
00:14:02.646 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:02.646 "nsid": 2,
00:14:02.646 "host": "nqn.2016-06.io.spdk:host1",
00:14:02.646 "method": "nvmf_ns_remove_host",
00:14:02.646 "req_id": 1
00:14:02.646 }
00:14:02.646 Got JSON-RPC error response
00:14:02.646 response:
00:14:02.646 {
00:14:02.646 "code": -32602,
00:14:02.646 "message": "Invalid parameters"
00:14:02.646 }
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:14:02.646 [ 0]:0x2
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=5a11122e2c994f89b10784d519db0e5e
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 5a11122e2c994f89b10784d519db0e5e != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect
00:14:02.646 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:14:02.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:14:02.907 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2330174
00:14:02.907 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2330174 /var/tmp/host.sock
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2330174 ']'
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:14:02.908 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:14:02.908 [2024-12-10 22:45:10.467394] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:14:02.908 [2024-12-10 22:45:10.467440] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2330174 ]
00:14:02.908 [2024-12-10 22:45:10.543490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:02.908 [2024-12-10 22:45:10.583540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:14:03.222 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:14:03.222 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:14:03.222 22:45:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.507 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:03.507 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 5e242591-bd38-4ce6-902b-699f182cae3c
00:14:03.507 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:03.507 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5E242591BD384CE6902B699F182CAE3C -i
00:14:03.766 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d0e3d8ca-dfb4-404a-bcae-5b78f5b07702
00:14:03.766 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:03.766 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D0E3D8CADFB4404ABCAE5B78F5B07702 -i
00:14:04.024 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:14:04.282 22:45:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2
00:14:04.541 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:04.541 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
00:14:04.799 nvme0n1
00:14:04.799 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:04.799 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1
00:14:05.059 nvme1n2
00:14:05.059 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name'
00:14:05.059 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs
00:14:05.059 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs
00:14:05.059 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort
00:14:05.059 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs
00:14:05.318 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]]
00:14:05.318 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1
00:14:05.318 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1
00:14:05.318 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid'
00:14:05.318 22:45:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 5e242591-bd38-4ce6-902b-699f182cae3c == \5\e\2\4\2\5\9\1\-\b\d\3\8\-\4\c\e\6\-\9\0\2\b\-\6\9\9\f\1\8\2\c\a\e\3\c ]]
00:14:05.318 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid'
00:14:05.318 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2
00:14:05.318 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2
00:14:05.577 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d0e3d8ca-dfb4-404a-bcae-5b78f5b07702 == \d\0\e\3\d\8\c\a\-\d\f\b\4\-\4\0\4\a\-\b\c\a\e\-\5\b\7\8\f\5\b\0\7\7\0\2 ]]
00:14:05.577 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:05.835 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 5e242591-bd38-4ce6-902b-699f182cae3c
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d -
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5E242591BD384CE6902B699F182CAE3C
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5E242591BD384CE6902B699F182CAE3C
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]]
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 5E242591BD384CE6902B699F182CAE3C
[2024-12-10 22:45:13.765431] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid
[2024-12-10 22:45:13.765461] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19
[2024-12-10 22:45:13.765470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:06.095 request:
00:14:06.095 {
00:14:06.095 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:14:06.095 "namespace": {
00:14:06.095 "bdev_name": "invalid",
00:14:06.095 "nsid": 1,
00:14:06.095 "nguid": "5E242591BD384CE6902B699F182CAE3C",
00:14:06.095 "no_auto_visible": false,
00:14:06.095 "hide_metadata": false
00:14:06.095 },
00:14:06.095 "method": "nvmf_subsystem_add_ns",
00:14:06.095 "req_id": 1
00:14:06.095 }
00:14:06.095 Got JSON-RPC error response
00:14:06.095 response:
00:14:06.095 {
00:14:06.095 "code": -32602,
00:14:06.095 "message": "Invalid parameters"
00:14:06.095 }
00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 5e242591-bd38-4ce6-902b-699f182cae3c 00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:14:06.095 22:45:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 5E242591BD384CE6902B699F182CAE3C -i 00:14:06.354 22:45:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2330174 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2330174 ']' 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2330174 00:14:08.885 22:45:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2330174 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2330174' 00:14:08.885 killing process with pid 2330174 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2330174 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2330174 00:14:08.885 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 
00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:09.144 rmmod nvme_tcp 00:14:09.144 rmmod nvme_fabrics 00:14:09.144 rmmod nvme_keyring 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2327737 ']' 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2327737 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2327737 ']' 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2327737 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:09.144 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2327737 00:14:09.402 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:09.402 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:09.402 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2327737' 00:14:09.402 killing process with pid 2327737 00:14:09.402 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2327737 00:14:09.402 22:45:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@978 -- # wait 2327737 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:09.402 22:45:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:11.937 00:14:11.937 real 0m26.149s 00:14:11.937 user 0m31.090s 00:14:11.937 sys 0m7.151s 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:11.937 ************************************ 00:14:11.937 END TEST nvmf_ns_masking 00:14:11.937 
************************************ 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:11.937 ************************************ 00:14:11.937 START TEST nvmf_nvme_cli 00:14:11.937 ************************************ 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:11.937 * Looking for test storage... 
00:14:11.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:14:11.937 22:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:11.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.937 --rc 
genhtml_branch_coverage=1 00:14:11.937 --rc genhtml_function_coverage=1 00:14:11.937 --rc genhtml_legend=1 00:14:11.937 --rc geninfo_all_blocks=1 00:14:11.937 --rc geninfo_unexecuted_blocks=1 00:14:11.937 00:14:11.937 ' 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:11.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.937 --rc genhtml_branch_coverage=1 00:14:11.937 --rc genhtml_function_coverage=1 00:14:11.937 --rc genhtml_legend=1 00:14:11.937 --rc geninfo_all_blocks=1 00:14:11.937 --rc geninfo_unexecuted_blocks=1 00:14:11.937 00:14:11.937 ' 00:14:11.937 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:11.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.937 --rc genhtml_branch_coverage=1 00:14:11.937 --rc genhtml_function_coverage=1 00:14:11.937 --rc genhtml_legend=1 00:14:11.938 --rc geninfo_all_blocks=1 00:14:11.938 --rc geninfo_unexecuted_blocks=1 00:14:11.938 00:14:11.938 ' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:11.938 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:11.938 --rc genhtml_branch_coverage=1 00:14:11.938 --rc genhtml_function_coverage=1 00:14:11.938 --rc genhtml_legend=1 00:14:11.938 --rc geninfo_all_blocks=1 00:14:11.938 --rc geninfo_unexecuted_blocks=1 00:14:11.938 00:14:11.938 ' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:11.938 22:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:14:11.938 22:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:11.938 22:45:19 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:11.938 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:14:11.938 22:45:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:14:18.510 22:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:18.510 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:18.510 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.510 22:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.510 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:18.511 Found net devices under 0000:86:00.0: cvl_0_0 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:18.511 Found net devices under 0000:86:00.1: cvl_0_1 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.511 22:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:18.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:14:18.511 00:14:18.511 --- 10.0.0.2 ping statistics --- 00:14:18.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.511 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:14:18.511 00:14:18.511 --- 10.0.0.1 ping statistics --- 00:14:18.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.511 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:18.511 22:45:25 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2334847 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2334847 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2334847 ']' 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:18.511 22:45:25 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.511 [2024-12-10 22:45:25.463216] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:14:18.511 [2024-12-10 22:45:25.463260] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.511 [2024-12-10 22:45:25.543541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:18.511 [2024-12-10 22:45:25.587055] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.511 [2024-12-10 22:45:25.587093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.511 [2024-12-10 22:45:25.587101] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.511 [2024-12-10 22:45:25.587107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.511 [2024-12-10 22:45:25.587112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
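Before the target app in this trace starts, the harness (`nvmf_tcp_init` in nvmf/common.sh, traced around `common.sh@271`–`@291` above) moved one port of the e810 NIC into a private network namespace and wired up addresses and a firewall rule. A condensed standalone sketch of that topology, using the interface and address names taken from this log (environment-specific: it requires root and the same two `cvl_*` netdevs, so treat it as a template rather than a runnable recipe):

```shell
# Sketch of the netns loopback topology built by nvmf_tcp_init in this trace.
# cvl_0_0/cvl_0_1 and 10.0.0.1/10.0.0.2 come from the log; substitute your
# own interface names. Must run as root.
ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator IP, host side
ip netns exec cvl_0_0_ns_spdk \
    ip addr add 10.0.0.2/24 dev cvl_0_0      # target IP, namespace side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow NVMe/TCP (port 4420) in; the SPDK_NVMF comment tag is what the
# harness greps for when it strips these rules during teardown.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                  # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host
```

The round-trip pings mirror the `common.sh@290`/`@291` checks in the trace: both directions must succeed before the target is launched inside the namespace.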
00:14:18.511 [2024-12-10 22:45:25.588523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:18.511 [2024-12-10 22:45:25.588631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:18.511 [2024-12-10 22:45:25.588740] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.511 [2024-12-10 22:45:25.588741] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.769 [2024-12-10 22:45:26.345114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.769 Malloc0 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.769 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 Malloc1 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 [2024-12-10 22:45:26.452538] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.770 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:19.027 00:14:19.027 Discovery Log Number of Records 2, Generation counter 2 00:14:19.027 =====Discovery Log Entry 0====== 00:14:19.027 trtype: tcp 00:14:19.027 adrfam: ipv4 00:14:19.027 subtype: current discovery subsystem 00:14:19.027 treq: not required 00:14:19.027 portid: 0 00:14:19.027 trsvcid: 4420 
00:14:19.027 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:19.027 traddr: 10.0.0.2 00:14:19.027 eflags: explicit discovery connections, duplicate discovery information 00:14:19.027 sectype: none 00:14:19.027 =====Discovery Log Entry 1====== 00:14:19.027 trtype: tcp 00:14:19.027 adrfam: ipv4 00:14:19.027 subtype: nvme subsystem 00:14:19.027 treq: not required 00:14:19.027 portid: 0 00:14:19.027 trsvcid: 4420 00:14:19.027 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:19.027 traddr: 10.0.0.2 00:14:19.027 eflags: none 00:14:19.027 sectype: none 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:19.028 22:45:26 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:20.400 22:45:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:20.400 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:20.400 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.400 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:20.400 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:20.400 22:45:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:22.299 
22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:22.299 /dev/nvme0n2 ]] 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.299 22:45:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:22.558 rmmod nvme_tcp 00:14:22.558 rmmod nvme_fabrics 00:14:22.558 rmmod nvme_keyring 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2334847 ']' 
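The `nvme discover` output earlier in this trace is line-oriented (`key: value` pairs under `=====Discovery Log Entry N======` headers) and easy to post-process. A self-contained sketch that pulls the subsystem type and subnqn out of each entry, with sample records copied from the discovery output above:

```shell
#!/bin/sh
# Extract subtype and subnqn from nvme-cli discovery-log output, in the
# plain-text format shown in this trace. Sample input is embedded so the
# script runs standalone.
parse_discovery() {
  awk '
    /^=====Discovery Log Entry/ { entry++ }                 # new record
    /^subtype:/ { sub(/^subtype: */, ""); subtype = $0 }    # keep full phrase
    /^subnqn:/  { print entry ": " subtype " -> " $2 }      # emit per record
  '
}

parse_discovery <<'EOF'
=====Discovery Log Entry 0======
trtype:  tcp
subtype: current discovery subsystem
subnqn:  nqn.2014-08.org.nvmexpress.discovery
=====Discovery Log Entry 1======
trtype:  tcp
subtype: nvme subsystem
subnqn:  nqn.2016-06.io.spdk:cnode1
EOF
# → 1: current discovery subsystem -> nqn.2014-08.org.nvmexpress.discovery
#   2: nvme subsystem -> nqn.2016-06.io.spdk:cnode1
```

As in the trace, a healthy target shows exactly two records: the discovery subsystem itself plus the `cnode1` subsystem the test created.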
00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2334847 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2334847 ']' 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2334847 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.558 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2334847 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2334847' 00:14:22.817 killing process with pid 2334847 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2334847 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2334847 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:22.817 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:22.818 22:45:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:25.354 00:14:25.354 real 0m13.309s 00:14:25.354 user 0m21.373s 00:14:25.354 sys 0m5.103s 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.354 ************************************ 00:14:25.354 END TEST nvmf_nvme_cli 00:14:25.354 ************************************ 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:25.354 ************************************ 00:14:25.354 
START TEST nvmf_vfio_user 00:14:25.354 ************************************ 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:25.354 * Looking for test storage... 00:14:25.354 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:25.354 22:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:25.354 22:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:25.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.354 --rc genhtml_branch_coverage=1 00:14:25.354 --rc genhtml_function_coverage=1 00:14:25.354 --rc genhtml_legend=1 00:14:25.354 --rc geninfo_all_blocks=1 00:14:25.354 --rc geninfo_unexecuted_blocks=1 00:14:25.354 00:14:25.354 ' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:25.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.354 --rc genhtml_branch_coverage=1 00:14:25.354 --rc genhtml_function_coverage=1 00:14:25.354 --rc genhtml_legend=1 00:14:25.354 --rc geninfo_all_blocks=1 00:14:25.354 --rc geninfo_unexecuted_blocks=1 00:14:25.354 00:14:25.354 ' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:25.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.354 --rc genhtml_branch_coverage=1 00:14:25.354 --rc genhtml_function_coverage=1 00:14:25.354 --rc genhtml_legend=1 00:14:25.354 --rc geninfo_all_blocks=1 00:14:25.354 --rc geninfo_unexecuted_blocks=1 00:14:25.354 00:14:25.354 ' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:25.354 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:25.354 --rc genhtml_branch_coverage=1 00:14:25.354 --rc genhtml_function_coverage=1 00:14:25.354 --rc genhtml_legend=1 00:14:25.354 --rc geninfo_all_blocks=1 00:14:25.354 --rc geninfo_unexecuted_blocks=1 00:14:25.354 00:14:25.354 ' 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:25.354 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:25.355 
22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:25.355 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:25.355 22:45:32 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2336167 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2336167' 00:14:25.355 Process pid: 2336167 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2336167 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # 
'[' -z 2336167 ']' 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:25.355 22:45:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:25.355 [2024-12-10 22:45:32.914359] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:14:25.355 [2024-12-10 22:45:32.914407] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:25.355 [2024-12-10 22:45:32.989489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:25.355 [2024-12-10 22:45:33.032127] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.355 [2024-12-10 22:45:33.032171] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.355 [2024-12-10 22:45:33.032180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.355 [2024-12-10 22:45:33.032186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.355 [2024-12-10 22:45:33.032192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:25.355 [2024-12-10 22:45:33.033612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.355 [2024-12-10 22:45:33.033797] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:14:25.355 [2024-12-10 22:45:33.033882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:25.355 [2024-12-10 22:45:33.033882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:14:25.614 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:25.614 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:25.614 22:45:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:26.549 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:26.808 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:26.808 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:26.808 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:26.808 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:26.808 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:27.067 Malloc1 00:14:27.067 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:27.325 22:45:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:27.325 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:27.583 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:27.583 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:27.583 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:27.841 Malloc2 00:14:27.841 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:28.099 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:28.099 22:45:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:28.357 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:28.357 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:28.357 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 
-- # for i in $(seq 1 $NUM_DEVICES) 00:14:28.357 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:28.357 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:28.357 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:28.357 [2024-12-10 22:45:36.057205] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:14:28.357 [2024-12-10 22:45:36.057238] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2336849 ] 00:14:28.618 [2024-12-10 22:45:36.100791] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:28.618 [2024-12-10 22:45:36.109441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:28.618 [2024-12-10 22:45:36.109463] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f22b8120000 00:14:28.618 [2024-12-10 22:45:36.110446] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.111436] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.112450] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.113460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.114467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.115466] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.116483] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.117487] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:28.618 [2024-12-10 22:45:36.118491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:28.618 [2024-12-10 22:45:36.118503] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f22b8115000 00:14:28.618 [2024-12-10 22:45:36.119446] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:28.618 [2024-12-10 22:45:36.133046] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:28.618 [2024-12-10 22:45:36.133073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:28.618 [2024-12-10 22:45:36.135579] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 
00:14:28.618 [2024-12-10 22:45:36.135616] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:28.618 [2024-12-10 22:45:36.135690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:28.618 [2024-12-10 22:45:36.135706] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:28.618 [2024-12-10 22:45:36.135711] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:28.618 [2024-12-10 22:45:36.136574] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:28.618 [2024-12-10 22:45:36.136583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:28.618 [2024-12-10 22:45:36.136590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:28.618 [2024-12-10 22:45:36.137577] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:28.618 [2024-12-10 22:45:36.137585] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:28.618 [2024-12-10 22:45:36.137592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:28.618 [2024-12-10 22:45:36.138585] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:28.618 [2024-12-10 22:45:36.138592] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:28.618 [2024-12-10 22:45:36.139588] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:28.618 [2024-12-10 22:45:36.139596] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:28.618 [2024-12-10 22:45:36.139601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:28.618 [2024-12-10 22:45:36.139607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:28.618 [2024-12-10 22:45:36.139714] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:28.618 [2024-12-10 22:45:36.139718] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:28.618 [2024-12-10 22:45:36.139723] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:28.618 [2024-12-10 22:45:36.140597] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:28.618 [2024-12-10 22:45:36.141600] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:28.618 [2024-12-10 22:45:36.142610] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:28.618 [2024-12-10 22:45:36.143611] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:28.618 [2024-12-10 22:45:36.143700] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:28.618 [2024-12-10 22:45:36.144624] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:28.618 [2024-12-10 22:45:36.144631] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:28.618 [2024-12-10 22:45:36.144636] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:28.618 [2024-12-10 22:45:36.144653] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:28.618 [2024-12-10 22:45:36.144660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:28.618 [2024-12-10 22:45:36.144672] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:28.618 [2024-12-10 22:45:36.144677] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:28.618 [2024-12-10 22:45:36.144680] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.618 [2024-12-10 22:45:36.144692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:28.618 [2024-12-10 22:45:36.144741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:28.618 [2024-12-10 22:45:36.144749] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:28.618 [2024-12-10 22:45:36.144753] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:28.618 [2024-12-10 22:45:36.144757] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:28.618 [2024-12-10 22:45:36.144762] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:28.618 [2024-12-10 22:45:36.144766] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:28.618 [2024-12-10 22:45:36.144770] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:28.618 [2024-12-10 22:45:36.144774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:28.618 [2024-12-10 22:45:36.144782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:28.618 [2024-12-10 22:45:36.144792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:28.618 [2024-12-10 22:45:36.144808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:28.618 [2024-12-10 22:45:36.144817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.618 [2024-12-10 
22:45:36.144827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.618 [2024-12-10 22:45:36.144834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.618 [2024-12-10 22:45:36.144842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:28.618 [2024-12-10 22:45:36.144846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:28.618 [2024-12-10 22:45:36.144854] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:28.618 [2024-12-10 22:45:36.144862] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:28.618 [2024-12-10 22:45:36.144874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:28.618 [2024-12-10 22:45:36.144879] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:28.618 [2024-12-10 22:45:36.144883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.144889] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.144894] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait 
for set number of queues (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.144902] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.144912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.144961] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.144970] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.144977] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:28.619 [2024-12-10 22:45:36.144981] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:28.619 [2024-12-10 22:45:36.144984] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.619 [2024-12-10 22:45:36.144989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145008] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:28.619 [2024-12-10 22:45:36.145019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145032] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:28.619 [2024-12-10 22:45:36.145035] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:28.619 [2024-12-10 22:45:36.145040] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.619 [2024-12-10 22:45:36.145045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145079] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145092] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:28.619 [2024-12-10 22:45:36.145096] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:28.619 [2024-12-10 22:45:36.145099] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.619 [2024-12-10 22:45:36.145105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145129] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145136] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145145] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145154] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:28.619 [2024-12-10 22:45:36.145163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:28.619 [2024-12-10 22:45:36.145168] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:28.619 [2024-12-10 22:45:36.145184] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145197] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145207] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145267] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:28.619 [2024-12-10 22:45:36.145271] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:28.619 [2024-12-10 22:45:36.145274] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:28.619 [2024-12-10 22:45:36.145277] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:28.619 [2024-12-10 22:45:36.145280] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:28.619 [2024-12-10 22:45:36.145286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 
0x2000002f7000 00:14:28.619 [2024-12-10 22:45:36.145292] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:28.619 [2024-12-10 22:45:36.145296] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:28.619 [2024-12-10 22:45:36.145299] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.619 [2024-12-10 22:45:36.145304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145310] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:28.619 [2024-12-10 22:45:36.145314] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:28.619 [2024-12-10 22:45:36.145317] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.619 [2024-12-10 22:45:36.145323] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145329] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:28.619 [2024-12-10 22:45:36.145333] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:28.619 [2024-12-10 22:45:36.145336] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:28.619 [2024-12-10 22:45:36.145341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:28.619 [2024-12-10 22:45:36.145348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0010 p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:28.619 [2024-12-10 22:45:36.145374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:28.619 ===================================================== 00:14:28.619 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:28.619 ===================================================== 00:14:28.619 Controller Capabilities/Features 00:14:28.619 ================================ 00:14:28.619 Vendor ID: 4e58 00:14:28.619 Subsystem Vendor ID: 4e58 00:14:28.619 Serial Number: SPDK1 00:14:28.619 Model Number: SPDK bdev Controller 00:14:28.619 Firmware Version: 25.01 00:14:28.619 Recommended Arb Burst: 6 00:14:28.619 IEEE OUI Identifier: 8d 6b 50 00:14:28.619 Multi-path I/O 00:14:28.619 May have multiple subsystem ports: Yes 00:14:28.619 May have multiple controllers: Yes 00:14:28.619 Associated with SR-IOV VF: No 00:14:28.619 Max Data Transfer Size: 131072 00:14:28.619 Max Number of Namespaces: 32 00:14:28.619 Max Number of I/O Queues: 127 00:14:28.619 NVMe Specification Version (VS): 1.3 00:14:28.619 NVMe Specification Version (Identify): 1.3 00:14:28.619 Maximum Queue Entries: 256 00:14:28.619 Contiguous Queues Required: Yes 00:14:28.619 Arbitration Mechanisms Supported 00:14:28.619 Weighted Round Robin: Not Supported 00:14:28.619 Vendor Specific: Not Supported 00:14:28.619 Reset Timeout: 15000 ms 00:14:28.619 Doorbell Stride: 4 bytes 00:14:28.619 NVM Subsystem Reset: Not Supported 00:14:28.619 Command Sets Supported 00:14:28.619 NVM Command Set: Supported 00:14:28.619 Boot Partition: Not Supported 00:14:28.619 Memory 
Page Size Minimum: 4096 bytes 00:14:28.619 Memory Page Size Maximum: 4096 bytes 00:14:28.619 Persistent Memory Region: Not Supported 00:14:28.619 Optional Asynchronous Events Supported 00:14:28.619 Namespace Attribute Notices: Supported 00:14:28.619 Firmware Activation Notices: Not Supported 00:14:28.619 ANA Change Notices: Not Supported 00:14:28.619 PLE Aggregate Log Change Notices: Not Supported 00:14:28.619 LBA Status Info Alert Notices: Not Supported 00:14:28.619 EGE Aggregate Log Change Notices: Not Supported 00:14:28.619 Normal NVM Subsystem Shutdown event: Not Supported 00:14:28.619 Zone Descriptor Change Notices: Not Supported 00:14:28.619 Discovery Log Change Notices: Not Supported 00:14:28.619 Controller Attributes 00:14:28.619 128-bit Host Identifier: Supported 00:14:28.619 Non-Operational Permissive Mode: Not Supported 00:14:28.620 NVM Sets: Not Supported 00:14:28.620 Read Recovery Levels: Not Supported 00:14:28.620 Endurance Groups: Not Supported 00:14:28.620 Predictable Latency Mode: Not Supported 00:14:28.620 Traffic Based Keep ALive: Not Supported 00:14:28.620 Namespace Granularity: Not Supported 00:14:28.620 SQ Associations: Not Supported 00:14:28.620 UUID List: Not Supported 00:14:28.620 Multi-Domain Subsystem: Not Supported 00:14:28.620 Fixed Capacity Management: Not Supported 00:14:28.620 Variable Capacity Management: Not Supported 00:14:28.620 Delete Endurance Group: Not Supported 00:14:28.620 Delete NVM Set: Not Supported 00:14:28.620 Extended LBA Formats Supported: Not Supported 00:14:28.620 Flexible Data Placement Supported: Not Supported 00:14:28.620 00:14:28.620 Controller Memory Buffer Support 00:14:28.620 ================================ 00:14:28.620 Supported: No 00:14:28.620 00:14:28.620 Persistent Memory Region Support 00:14:28.620 ================================ 00:14:28.620 Supported: No 00:14:28.620 00:14:28.620 Admin Command Set Attributes 00:14:28.620 ============================ 00:14:28.620 Security Send/Receive: Not Supported 
00:14:28.620 Format NVM: Not Supported 00:14:28.620 Firmware Activate/Download: Not Supported 00:14:28.620 Namespace Management: Not Supported 00:14:28.620 Device Self-Test: Not Supported 00:14:28.620 Directives: Not Supported 00:14:28.620 NVMe-MI: Not Supported 00:14:28.620 Virtualization Management: Not Supported 00:14:28.620 Doorbell Buffer Config: Not Supported 00:14:28.620 Get LBA Status Capability: Not Supported 00:14:28.620 Command & Feature Lockdown Capability: Not Supported 00:14:28.620 Abort Command Limit: 4 00:14:28.620 Async Event Request Limit: 4 00:14:28.620 Number of Firmware Slots: N/A 00:14:28.620 Firmware Slot 1 Read-Only: N/A 00:14:28.620 Firmware Activation Without Reset: N/A 00:14:28.620 Multiple Update Detection Support: N/A 00:14:28.620 Firmware Update Granularity: No Information Provided 00:14:28.620 Per-Namespace SMART Log: No 00:14:28.620 Asymmetric Namespace Access Log Page: Not Supported 00:14:28.620 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:28.620 Command Effects Log Page: Supported 00:14:28.620 Get Log Page Extended Data: Supported 00:14:28.620 Telemetry Log Pages: Not Supported 00:14:28.620 Persistent Event Log Pages: Not Supported 00:14:28.620 Supported Log Pages Log Page: May Support 00:14:28.620 Commands Supported & Effects Log Page: Not Supported 00:14:28.620 Feature Identifiers & Effects Log Page:May Support 00:14:28.620 NVMe-MI Commands & Effects Log Page: May Support 00:14:28.620 Data Area 4 for Telemetry Log: Not Supported 00:14:28.620 Error Log Page Entries Supported: 128 00:14:28.620 Keep Alive: Supported 00:14:28.620 Keep Alive Granularity: 10000 ms 00:14:28.620 00:14:28.620 NVM Command Set Attributes 00:14:28.620 ========================== 00:14:28.620 Submission Queue Entry Size 00:14:28.620 Max: 64 00:14:28.620 Min: 64 00:14:28.620 Completion Queue Entry Size 00:14:28.620 Max: 16 00:14:28.620 Min: 16 00:14:28.620 Number of Namespaces: 32 00:14:28.620 Compare Command: Supported 00:14:28.620 Write Uncorrectable 
Command: Not Supported 00:14:28.620 Dataset Management Command: Supported 00:14:28.620 Write Zeroes Command: Supported 00:14:28.620 Set Features Save Field: Not Supported 00:14:28.620 Reservations: Not Supported 00:14:28.620 Timestamp: Not Supported 00:14:28.620 Copy: Supported 00:14:28.620 Volatile Write Cache: Present 00:14:28.620 Atomic Write Unit (Normal): 1 00:14:28.620 Atomic Write Unit (PFail): 1 00:14:28.620 Atomic Compare & Write Unit: 1 00:14:28.620 Fused Compare & Write: Supported 00:14:28.620 Scatter-Gather List 00:14:28.620 SGL Command Set: Supported (Dword aligned) 00:14:28.620 SGL Keyed: Not Supported 00:14:28.620 SGL Bit Bucket Descriptor: Not Supported 00:14:28.620 SGL Metadata Pointer: Not Supported 00:14:28.620 Oversized SGL: Not Supported 00:14:28.620 SGL Metadata Address: Not Supported 00:14:28.620 SGL Offset: Not Supported 00:14:28.620 Transport SGL Data Block: Not Supported 00:14:28.620 Replay Protected Memory Block: Not Supported 00:14:28.620 00:14:28.620 Firmware Slot Information 00:14:28.620 ========================= 00:14:28.620 Active slot: 1 00:14:28.620 Slot 1 Firmware Revision: 25.01 00:14:28.620 00:14:28.620 00:14:28.620 Commands Supported and Effects 00:14:28.620 ============================== 00:14:28.620 Admin Commands 00:14:28.620 -------------- 00:14:28.620 Get Log Page (02h): Supported 00:14:28.620 Identify (06h): Supported 00:14:28.620 Abort (08h): Supported 00:14:28.620 Set Features (09h): Supported 00:14:28.620 Get Features (0Ah): Supported 00:14:28.620 Asynchronous Event Request (0Ch): Supported 00:14:28.620 Keep Alive (18h): Supported 00:14:28.620 I/O Commands 00:14:28.620 ------------ 00:14:28.620 Flush (00h): Supported LBA-Change 00:14:28.620 Write (01h): Supported LBA-Change 00:14:28.620 Read (02h): Supported 00:14:28.620 Compare (05h): Supported 00:14:28.620 Write Zeroes (08h): Supported LBA-Change 00:14:28.620 Dataset Management (09h): Supported LBA-Change 00:14:28.620 Copy (19h): Supported LBA-Change 00:14:28.620 
00:14:28.620 Error Log 00:14:28.620 ========= 00:14:28.620 00:14:28.620 Arbitration 00:14:28.620 =========== 00:14:28.620 Arbitration Burst: 1 00:14:28.620 00:14:28.620 Power Management 00:14:28.620 ================ 00:14:28.620 Number of Power States: 1 00:14:28.620 Current Power State: Power State #0 00:14:28.620 Power State #0: 00:14:28.620 Max Power: 0.00 W 00:14:28.620 Non-Operational State: Operational 00:14:28.620 Entry Latency: Not Reported 00:14:28.620 Exit Latency: Not Reported 00:14:28.620 Relative Read Throughput: 0 00:14:28.620 Relative Read Latency: 0 00:14:28.620 Relative Write Throughput: 0 00:14:28.620 Relative Write Latency: 0 00:14:28.620 Idle Power: Not Reported 00:14:28.620 Active Power: Not Reported 00:14:28.620 Non-Operational Permissive Mode: Not Supported 00:14:28.620 00:14:28.620 Health Information 00:14:28.620 ================== 00:14:28.620 Critical Warnings: 00:14:28.620 Available Spare Space: OK 00:14:28.620 Temperature: OK 00:14:28.620 Device Reliability: OK 00:14:28.620 Read Only: No 00:14:28.620 Volatile Memory Backup: OK 00:14:28.620 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:28.620 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:28.620 Available Spare: 0% 00:14:28.620 Available Sp[2024-12-10 22:45:36.145456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:28.620 [2024-12-10 22:45:36.145467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:28.620 [2024-12-10 22:45:36.145490] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:14:28.620 [2024-12-10 22:45:36.145498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.620 [2024-12-10 22:45:36.145504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.620 [2024-12-10 22:45:36.145512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.620 [2024-12-10 22:45:36.145518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:28.620 [2024-12-10 22:45:36.148168] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:28.620 [2024-12-10 22:45:36.148181] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:28.620 [2024-12-10 22:45:36.148639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:28.620 [2024-12-10 22:45:36.148689] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:14:28.620 [2024-12-10 22:45:36.148695] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:14:28.620 [2024-12-10 22:45:36.149643] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:28.620 [2024-12-10 22:45:36.149654] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:14:28.620 [2024-12-10 22:45:36.149705] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:28.620 [2024-12-10 22:45:36.151684] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:28.620 are Threshold: 0% 00:14:28.620 Life Percentage Used: 0% 
00:14:28.620 Data Units Read: 0 00:14:28.620 Data Units Written: 0 00:14:28.620 Host Read Commands: 0 00:14:28.620 Host Write Commands: 0 00:14:28.620 Controller Busy Time: 0 minutes 00:14:28.620 Power Cycles: 0 00:14:28.620 Power On Hours: 0 hours 00:14:28.620 Unsafe Shutdowns: 0 00:14:28.620 Unrecoverable Media Errors: 0 00:14:28.620 Lifetime Error Log Entries: 0 00:14:28.620 Warning Temperature Time: 0 minutes 00:14:28.620 Critical Temperature Time: 0 minutes 00:14:28.620 00:14:28.620 Number of Queues 00:14:28.620 ================ 00:14:28.621 Number of I/O Submission Queues: 127 00:14:28.621 Number of I/O Completion Queues: 127 00:14:28.621 00:14:28.621 Active Namespaces 00:14:28.621 ================= 00:14:28.621 Namespace ID:1 00:14:28.621 Error Recovery Timeout: Unlimited 00:14:28.621 Command Set Identifier: NVM (00h) 00:14:28.621 Deallocate: Supported 00:14:28.621 Deallocated/Unwritten Error: Not Supported 00:14:28.621 Deallocated Read Value: Unknown 00:14:28.621 Deallocate in Write Zeroes: Not Supported 00:14:28.621 Deallocated Guard Field: 0xFFFF 00:14:28.621 Flush: Supported 00:14:28.621 Reservation: Supported 00:14:28.621 Namespace Sharing Capabilities: Multiple Controllers 00:14:28.621 Size (in LBAs): 131072 (0GiB) 00:14:28.621 Capacity (in LBAs): 131072 (0GiB) 00:14:28.621 Utilization (in LBAs): 131072 (0GiB) 00:14:28.621 NGUID: 2FE3BC116A044A848F1DA01DACEA658C 00:14:28.621 UUID: 2fe3bc11-6a04-4a84-8f1d-a01dacea658c 00:14:28.621 Thin Provisioning: Not Supported 00:14:28.621 Per-NS Atomic Units: Yes 00:14:28.621 Atomic Boundary Size (Normal): 0 00:14:28.621 Atomic Boundary Size (PFail): 0 00:14:28.621 Atomic Boundary Offset: 0 00:14:28.621 Maximum Single Source Range Length: 65535 00:14:28.621 Maximum Copy Length: 65535 00:14:28.621 Maximum Source Range Count: 1 00:14:28.621 NGUID/EUI64 Never Reused: No 00:14:28.621 Namespace Write Protected: No 00:14:28.621 Number of LBA Formats: 1 00:14:28.621 Current LBA Format: LBA Format #00 00:14:28.621 LBA 
Format #00: Data Size: 512 Metadata Size: 0 00:14:28.621 00:14:28.621 22:45:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:28.879 [2024-12-10 22:45:36.387996] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:34.147 Initializing NVMe Controllers 00:14:34.147 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:34.147 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:34.147 Initialization complete. Launching workers. 00:14:34.147 ======================================================== 00:14:34.147 Latency(us) 00:14:34.147 Device Information : IOPS MiB/s Average min max 00:14:34.147 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39959.99 156.09 3203.78 991.76 7662.59 00:14:34.147 ======================================================== 00:14:34.147 Total : 39959.99 156.09 3203.78 991.76 7662.59 00:14:34.147 00:14:34.147 [2024-12-10 22:45:41.409237] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:34.147 22:45:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:34.147 [2024-12-10 22:45:41.647373] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.413 Initializing NVMe Controllers 00:14:39.413 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.413 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:39.413 Initialization complete. Launching workers. 00:14:39.413 ======================================================== 00:14:39.413 Latency(us) 00:14:39.413 Device Information : IOPS MiB/s Average min max 00:14:39.413 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16048.45 62.69 7981.20 5985.27 15481.81 00:14:39.413 ======================================================== 00:14:39.413 Total : 16048.45 62.69 7981.20 5985.27 15481.81 00:14:39.413 00:14:39.413 [2024-12-10 22:45:46.691298] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.413 22:45:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:39.413 [2024-12-10 22:45:46.905295] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.684 [2024-12-10 22:45:51.973423] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.684 Initializing NVMe Controllers 00:14:44.684 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.684 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.684 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:44.684 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:44.684 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:44.684 Initialization complete. 
Launching workers. 00:14:44.684 Starting thread on core 2 00:14:44.684 Starting thread on core 3 00:14:44.684 Starting thread on core 1 00:14:44.684 22:45:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:44.684 [2024-12-10 22:45:52.278583] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.971 [2024-12-10 22:45:55.342584] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:47.971 Initializing NVMe Controllers 00:14:47.971 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.971 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.971 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:47.971 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:47.971 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:47.971 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:47.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:14:47.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:47.971 Initialization complete. Launching workers. 
00:14:47.971 Starting thread on core 1 with urgent priority queue 00:14:47.971 Starting thread on core 2 with urgent priority queue 00:14:47.971 Starting thread on core 3 with urgent priority queue 00:14:47.971 Starting thread on core 0 with urgent priority queue 00:14:47.971 SPDK bdev Controller (SPDK1 ) core 0: 8496.67 IO/s 11.77 secs/100000 ios 00:14:47.971 SPDK bdev Controller (SPDK1 ) core 1: 9486.00 IO/s 10.54 secs/100000 ios 00:14:47.971 SPDK bdev Controller (SPDK1 ) core 2: 8939.00 IO/s 11.19 secs/100000 ios 00:14:47.971 SPDK bdev Controller (SPDK1 ) core 3: 7714.67 IO/s 12.96 secs/100000 ios 00:14:47.971 ======================================================== 00:14:47.971 00:14:47.971 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:47.971 [2024-12-10 22:45:55.631660] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:47.971 Initializing NVMe Controllers 00:14:47.971 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.971 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:47.971 Namespace ID: 1 size: 0GB 00:14:47.971 Initialization complete. 00:14:47.971 INFO: using host memory buffer for IO 00:14:47.971 Hello world! 
00:14:47.971 [2024-12-10 22:45:55.665905] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:48.229 22:45:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:48.229 [2024-12-10 22:45:55.950592] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.671 Initializing NVMe Controllers 00:14:49.671 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:49.671 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:49.671 Initialization complete. Launching workers. 00:14:49.671 submit (in ns) avg, min, max = 6770.8, 3220.9, 4000368.7 00:14:49.671 complete (in ns) avg, min, max = 20748.2, 1797.4, 4000311.3 00:14:49.671 00:14:49.671 Submit histogram 00:14:49.671 ================ 00:14:49.671 Range in us Cumulative Count 00:14:49.671 3.214 - 3.228: 0.0126% ( 2) 00:14:49.671 3.242 - 3.256: 0.0252% ( 2) 00:14:49.671 3.256 - 3.270: 0.0314% ( 1) 00:14:49.672 3.270 - 3.283: 0.0629% ( 5) 00:14:49.672 3.283 - 3.297: 0.2515% ( 30) 00:14:49.672 3.297 - 3.311: 0.5345% ( 45) 00:14:49.672 3.311 - 3.325: 1.0249% ( 78) 00:14:49.672 3.325 - 3.339: 1.7228% ( 111) 00:14:49.672 3.339 - 3.353: 3.9235% ( 350) 00:14:49.672 3.353 - 3.367: 8.2181% ( 683) 00:14:49.672 3.367 - 3.381: 13.3991% ( 824) 00:14:49.672 3.381 - 3.395: 19.5737% ( 982) 00:14:49.672 3.395 - 3.409: 25.9494% ( 1014) 00:14:49.672 3.409 - 3.423: 31.9416% ( 953) 00:14:49.672 3.423 - 3.437: 37.2674% ( 847) 00:14:49.672 3.437 - 3.450: 42.6811% ( 861) 00:14:49.672 3.450 - 3.464: 47.0699% ( 698) 00:14:49.672 3.464 - 3.478: 51.2827% ( 670) 00:14:49.672 3.478 - 3.492: 56.1745% ( 778) 00:14:49.672 3.492 - 3.506: 63.1791% ( 1114) 00:14:49.672 3.506 - 3.520: 68.2218% ( 802) 00:14:49.672 
3.520 - 3.534: 72.3151% ( 651) 00:14:49.672 3.534 - 3.548: 77.4396% ( 815) 00:14:49.672 3.548 - 3.562: 81.6839% ( 675) 00:14:49.672 3.562 - 3.590: 86.1859% ( 716) 00:14:49.672 3.590 - 3.617: 87.4937% ( 208) 00:14:49.672 3.617 - 3.645: 88.1602% ( 106) 00:14:49.672 3.645 - 3.673: 89.6315% ( 234) 00:14:49.672 3.673 - 3.701: 91.4801% ( 294) 00:14:49.672 3.701 - 3.729: 92.9829% ( 239) 00:14:49.672 3.729 - 3.757: 94.7812% ( 286) 00:14:49.672 3.757 - 3.784: 96.4349% ( 263) 00:14:49.672 3.784 - 3.812: 97.7364% ( 207) 00:14:49.672 3.812 - 3.840: 98.5287% ( 126) 00:14:49.672 3.840 - 3.868: 98.9625% ( 69) 00:14:49.672 3.868 - 3.896: 99.3021% ( 54) 00:14:49.672 3.896 - 3.923: 99.4341% ( 21) 00:14:49.672 3.923 - 3.951: 99.4718% ( 6) 00:14:49.672 3.951 - 3.979: 99.4781% ( 1) 00:14:49.672 3.979 - 4.007: 99.4907% ( 2) 00:14:49.672 4.007 - 4.035: 99.4970% ( 1) 00:14:49.672 4.035 - 4.063: 99.5033% ( 1) 00:14:49.672 4.118 - 4.146: 99.5096% ( 1) 00:14:49.672 4.174 - 4.202: 99.5158% ( 1) 00:14:49.672 4.202 - 4.230: 99.5221% ( 1) 00:14:49.672 4.230 - 4.257: 99.5284% ( 1) 00:14:49.672 4.257 - 4.285: 99.5347% ( 1) 00:14:49.672 4.285 - 4.313: 99.5410% ( 1) 00:14:49.672 4.313 - 4.341: 99.5536% ( 2) 00:14:49.672 4.369 - 4.397: 99.5599% ( 1) 00:14:49.672 4.591 - 4.619: 99.5661% ( 1) 00:14:49.672 4.814 - 4.842: 99.5724% ( 1) 00:14:49.672 5.064 - 5.092: 99.5787% ( 1) 00:14:49.672 5.148 - 5.176: 99.5913% ( 2) 00:14:49.672 5.231 - 5.259: 99.5976% ( 1) 00:14:49.672 5.287 - 5.315: 99.6039% ( 1) 00:14:49.672 5.537 - 5.565: 99.6102% ( 1) 00:14:49.672 5.565 - 5.593: 99.6164% ( 1) 00:14:49.672 5.593 - 5.621: 99.6290% ( 2) 00:14:49.672 5.649 - 5.677: 99.6353% ( 1) 00:14:49.672 5.677 - 5.704: 99.6479% ( 2) 00:14:49.672 5.760 - 5.788: 99.6605% ( 2) 00:14:49.672 5.788 - 5.816: 99.6730% ( 2) 00:14:49.672 5.955 - 5.983: 99.6793% ( 1) 00:14:49.672 6.094 - 6.122: 99.6856% ( 1) 00:14:49.672 6.483 - 6.511: 99.6919% ( 1) 00:14:49.672 6.678 - 6.706: 99.6982% ( 1) 00:14:49.672 6.706 - 6.734: 99.7045% ( 1) 
00:14:49.672 6.734 - 6.762: 99.7171% ( 2) 00:14:49.672 6.929 - 6.957: 99.7233% ( 1) 00:14:49.672 6.957 - 6.984: 99.7296% ( 1) 00:14:49.672 7.096 - 7.123: 99.7359% ( 1) 00:14:49.672 7.179 - 7.235: 99.7485% ( 2) 00:14:49.672 7.235 - 7.290: 99.7548% ( 1) 00:14:49.672 7.290 - 7.346: 99.7611% ( 1) 00:14:49.672 7.346 - 7.402: 99.7674% ( 1) 00:14:49.672 7.402 - 7.457: 99.7799% ( 2) 00:14:49.672 7.457 - 7.513: 99.7862% ( 1) 00:14:49.672 7.513 - 7.569: 99.7925% ( 1) 00:14:49.672 [2024-12-10 22:45:56.972797] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.672 7.680 - 7.736: 99.7988% ( 1) 00:14:49.672 7.791 - 7.847: 99.8114% ( 2) 00:14:49.672 7.847 - 7.903: 99.8177% ( 1) 00:14:49.672 7.903 - 7.958: 99.8302% ( 2) 00:14:49.672 7.958 - 8.014: 99.8365% ( 1) 00:14:49.672 8.070 - 8.125: 99.8428% ( 1) 00:14:49.672 8.403 - 8.459: 99.8491% ( 1) 00:14:49.672 8.515 - 8.570: 99.8554% ( 1) 00:14:49.672 8.570 - 8.626: 99.8617% ( 1) 00:14:49.672 8.737 - 8.793: 99.8680% ( 1) 00:14:49.672 8.849 - 8.904: 99.8805% ( 2) 00:14:49.672 8.904 - 8.960: 99.8868% ( 1) 00:14:49.672 9.127 - 9.183: 99.8931% ( 1) 00:14:49.672 9.294 - 9.350: 99.8994% ( 1) 00:14:49.672 9.739 - 9.795: 99.9057% ( 1) 00:14:49.672 9.795 - 9.850: 99.9120% ( 1) 00:14:49.672 19.144 - 19.256: 99.9183% ( 1) 00:14:49.672 3989.148 - 4017.642: 100.0000% ( 13) 00:14:49.672 00:14:49.672 Complete histogram 00:14:49.672 ================== 00:14:49.672 Range in us Cumulative Count 00:14:49.672 1.795 - 1.809: 0.0377% ( 6) 00:14:49.672 1.809 - 1.823: 0.3584% ( 51) 00:14:49.672 1.823 - 1.837: 0.7420% ( 61) 00:14:49.672 1.837 - 1.850: 1.0626% ( 51) 00:14:49.672 1.850 - 1.864: 5.9545% ( 778) 00:14:49.672 1.864 - 1.878: 37.5755% ( 5029) 00:14:49.672 1.878 - 1.892: 67.1026% ( 4696) 00:14:49.672 1.892 - 1.906: 87.2862% ( 3210) 00:14:49.672 1.906 - 1.920: 93.3350% ( 962) 00:14:49.672 1.920 - 1.934: 95.1270% ( 285) 00:14:49.672 1.934 - 1.948: 96.5103% ( 220) 00:14:49.672 1.948 - 1.962: 
97.7930% ( 204) 00:14:49.672 1.962 - 1.976: 98.5790% ( 125) 00:14:49.672 1.976 - 1.990: 98.8808% ( 48) 00:14:49.672 1.990 - 2.003: 98.9940% ( 18) 00:14:49.672 2.003 - 2.017: 99.0820% ( 14) 00:14:49.672 2.017 - 2.031: 99.1071% ( 4) 00:14:49.672 2.031 - 2.045: 99.1323% ( 4) 00:14:49.672 2.045 - 2.059: 99.1574% ( 4) 00:14:49.672 2.059 - 2.073: 99.1826% ( 4) 00:14:49.672 2.073 - 2.087: 99.1889% ( 1) 00:14:49.672 2.087 - 2.101: 99.2015% ( 2) 00:14:49.672 2.101 - 2.115: 99.2077% ( 1) 00:14:49.672 2.115 - 2.129: 99.2203% ( 2) 00:14:49.672 2.170 - 2.184: 99.2266% ( 1) 00:14:49.672 2.184 - 2.198: 99.2392% ( 2) 00:14:49.672 2.198 - 2.212: 99.2455% ( 1) 00:14:49.672 2.240 - 2.254: 99.2580% ( 2) 00:14:49.672 2.254 - 2.268: 99.2643% ( 1) 00:14:49.672 2.268 - 2.282: 99.2706% ( 1) 00:14:49.672 2.296 - 2.310: 99.2832% ( 2) 00:14:49.672 2.310 - 2.323: 99.2895% ( 1) 00:14:49.672 2.323 - 2.337: 99.2958% ( 1) 00:14:49.672 2.337 - 2.351: 99.3021% ( 1) 00:14:49.672 2.365 - 2.379: 99.3084% ( 1) 00:14:49.672 2.379 - 2.393: 99.3146% ( 1) 00:14:49.672 2.393 - 2.407: 99.3209% ( 1) 00:14:49.672 2.407 - 2.421: 99.3272% ( 1) 00:14:49.672 2.630 - 2.643: 99.3335% ( 1) 00:14:49.672 3.701 - 3.729: 99.3398% ( 1) 00:14:49.672 3.840 - 3.868: 99.3524% ( 2) 00:14:49.672 3.896 - 3.923: 99.3587% ( 1) 00:14:49.672 4.063 - 4.090: 99.3712% ( 2) 00:14:49.672 4.090 - 4.118: 99.3838% ( 2) 00:14:49.672 4.118 - 4.146: 99.3901% ( 1) 00:14:49.672 4.202 - 4.230: 99.3964% ( 1) 00:14:49.672 4.285 - 4.313: 99.4027% ( 1) 00:14:49.672 4.313 - 4.341: 99.4090% ( 1) 00:14:49.672 4.397 - 4.424: 99.4152% ( 1) 00:14:49.672 5.148 - 5.176: 99.4215% ( 1) 00:14:49.672 5.259 - 5.287: 99.4278% ( 1) 00:14:49.672 5.370 - 5.398: 99.4341% ( 1) 00:14:49.672 5.537 - 5.565: 99.4404% ( 1) 00:14:49.672 5.843 - 5.871: 99.4467% ( 1) 00:14:49.672 5.927 - 5.955: 99.4530% ( 1) 00:14:49.672 6.177 - 6.205: 99.4593% ( 1) 00:14:49.672 6.456 - 6.483: 99.4655% ( 1) 00:14:49.672 6.623 - 6.650: 99.4781% ( 2) 00:14:49.672 7.235 - 7.290: 99.4844% ( 1) 
00:14:49.672 7.290 - 7.346: 99.4970% ( 2) 00:14:49.672 11.297 - 11.353: 99.5033% ( 1) 00:14:49.672 13.078 - 13.134: 99.5096% ( 1) 00:14:49.672 14.581 - 14.692: 99.5158% ( 1) 00:14:49.672 31.388 - 31.610: 99.5221% ( 1) 00:14:49.672 219.937 - 220.828: 99.5284% ( 1) 00:14:49.672 3989.148 - 4017.642: 100.0000% ( 75) 00:14:49.672 00:14:49.672 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:49.672 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:49.672 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:49.672 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:49.672 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:49.672 [ 00:14:49.672 { 00:14:49.672 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:49.672 "subtype": "Discovery", 00:14:49.672 "listen_addresses": [], 00:14:49.672 "allow_any_host": true, 00:14:49.672 "hosts": [] 00:14:49.672 }, 00:14:49.672 { 00:14:49.672 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:49.672 "subtype": "NVMe", 00:14:49.672 "listen_addresses": [ 00:14:49.672 { 00:14:49.672 "trtype": "VFIOUSER", 00:14:49.672 "adrfam": "IPv4", 00:14:49.672 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:49.672 "trsvcid": "0" 00:14:49.672 } 00:14:49.672 ], 00:14:49.672 "allow_any_host": true, 00:14:49.672 "hosts": [], 00:14:49.672 "serial_number": "SPDK1", 00:14:49.672 "model_number": "SPDK bdev Controller", 00:14:49.672 "max_namespaces": 32, 00:14:49.673 "min_cntlid": 1, 00:14:49.673 "max_cntlid": 65519, 00:14:49.673 "namespaces": [ 00:14:49.673 { 
00:14:49.673 "nsid": 1, 00:14:49.673 "bdev_name": "Malloc1", 00:14:49.673 "name": "Malloc1", 00:14:49.673 "nguid": "2FE3BC116A044A848F1DA01DACEA658C", 00:14:49.673 "uuid": "2fe3bc11-6a04-4a84-8f1d-a01dacea658c" 00:14:49.673 } 00:14:49.673 ] 00:14:49.673 }, 00:14:49.673 { 00:14:49.673 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:49.673 "subtype": "NVMe", 00:14:49.673 "listen_addresses": [ 00:14:49.673 { 00:14:49.673 "trtype": "VFIOUSER", 00:14:49.673 "adrfam": "IPv4", 00:14:49.673 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:49.673 "trsvcid": "0" 00:14:49.673 } 00:14:49.673 ], 00:14:49.673 "allow_any_host": true, 00:14:49.673 "hosts": [], 00:14:49.673 "serial_number": "SPDK2", 00:14:49.673 "model_number": "SPDK bdev Controller", 00:14:49.673 "max_namespaces": 32, 00:14:49.673 "min_cntlid": 1, 00:14:49.673 "max_cntlid": 65519, 00:14:49.673 "namespaces": [ 00:14:49.673 { 00:14:49.673 "nsid": 1, 00:14:49.673 "bdev_name": "Malloc2", 00:14:49.673 "name": "Malloc2", 00:14:49.673 "nguid": "9244013F8EA841DE89ABFCEB8C69B034", 00:14:49.673 "uuid": "9244013f-8ea8-41de-89ab-fceb8c69b034" 00:14:49.673 } 00:14:49.673 ] 00:14:49.673 } 00:14:49.673 ] 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2340302 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:49.673 22:45:57 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:49.673 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:49.673 [2024-12-10 22:45:57.384615] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.932 Malloc3 00:14:49.932 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:49.932 [2024-12-10 22:45:57.618485] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.932 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:49.932 Asynchronous Event Request test 00:14:49.932 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:49.932 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:49.932 Registering asynchronous event callbacks... 00:14:49.932 Starting namespace attribute notice tests for all controllers... 00:14:49.932 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:49.932 aer_cb - Changed Namespace 00:14:49.932 Cleaning up... 
00:14:50.191 [ 00:14:50.191 { 00:14:50.191 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:50.191 "subtype": "Discovery", 00:14:50.191 "listen_addresses": [], 00:14:50.191 "allow_any_host": true, 00:14:50.191 "hosts": [] 00:14:50.191 }, 00:14:50.191 { 00:14:50.191 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:50.191 "subtype": "NVMe", 00:14:50.191 "listen_addresses": [ 00:14:50.191 { 00:14:50.191 "trtype": "VFIOUSER", 00:14:50.191 "adrfam": "IPv4", 00:14:50.191 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:50.191 "trsvcid": "0" 00:14:50.191 } 00:14:50.191 ], 00:14:50.191 "allow_any_host": true, 00:14:50.191 "hosts": [], 00:14:50.191 "serial_number": "SPDK1", 00:14:50.191 "model_number": "SPDK bdev Controller", 00:14:50.191 "max_namespaces": 32, 00:14:50.191 "min_cntlid": 1, 00:14:50.191 "max_cntlid": 65519, 00:14:50.191 "namespaces": [ 00:14:50.191 { 00:14:50.191 "nsid": 1, 00:14:50.191 "bdev_name": "Malloc1", 00:14:50.191 "name": "Malloc1", 00:14:50.191 "nguid": "2FE3BC116A044A848F1DA01DACEA658C", 00:14:50.191 "uuid": "2fe3bc11-6a04-4a84-8f1d-a01dacea658c" 00:14:50.191 }, 00:14:50.191 { 00:14:50.191 "nsid": 2, 00:14:50.191 "bdev_name": "Malloc3", 00:14:50.191 "name": "Malloc3", 00:14:50.191 "nguid": "7894D0247C2A4F7EA9FF49F6198EAE67", 00:14:50.191 "uuid": "7894d024-7c2a-4f7e-a9ff-49f6198eae67" 00:14:50.191 } 00:14:50.191 ] 00:14:50.191 }, 00:14:50.191 { 00:14:50.191 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:50.191 "subtype": "NVMe", 00:14:50.191 "listen_addresses": [ 00:14:50.191 { 00:14:50.191 "trtype": "VFIOUSER", 00:14:50.191 "adrfam": "IPv4", 00:14:50.191 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:50.191 "trsvcid": "0" 00:14:50.191 } 00:14:50.191 ], 00:14:50.191 "allow_any_host": true, 00:14:50.191 "hosts": [], 00:14:50.191 "serial_number": "SPDK2", 00:14:50.191 "model_number": "SPDK bdev Controller", 00:14:50.191 "max_namespaces": 32, 00:14:50.191 "min_cntlid": 1, 00:14:50.191 "max_cntlid": 65519, 00:14:50.191 "namespaces": [ 
00:14:50.191 { 00:14:50.191 "nsid": 1, 00:14:50.191 "bdev_name": "Malloc2", 00:14:50.191 "name": "Malloc2", 00:14:50.191 "nguid": "9244013F8EA841DE89ABFCEB8C69B034", 00:14:50.191 "uuid": "9244013f-8ea8-41de-89ab-fceb8c69b034" 00:14:50.191 } 00:14:50.191 ] 00:14:50.191 } 00:14:50.191 ] 00:14:50.191 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2340302 00:14:50.191 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.191 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:50.191 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:50.191 22:45:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:50.191 [2024-12-10 22:45:57.863786] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:14:50.191 [2024-12-10 22:45:57.863818] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2340317 ] 00:14:50.191 [2024-12-10 22:45:57.908225] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:50.191 [2024-12-10 22:45:57.916407] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.191 [2024-12-10 22:45:57.916431] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5ff7889000 00:14:50.191 [2024-12-10 22:45:57.917402] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.191 [2024-12-10 22:45:57.918415] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.452 [2024-12-10 22:45:57.919427] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.452 [2024-12-10 22:45:57.920439] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.452 [2024-12-10 22:45:57.921443] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.452 [2024-12-10 22:45:57.922452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.452 [2024-12-10 22:45:57.923451] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:50.452 
[2024-12-10 22:45:57.924461] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:50.452 [2024-12-10 22:45:57.925469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:50.452 [2024-12-10 22:45:57.925483] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5ff787e000 00:14:50.452 [2024-12-10 22:45:57.926425] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.452 [2024-12-10 22:45:57.936951] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:50.452 [2024-12-10 22:45:57.936978] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:50.452 [2024-12-10 22:45:57.942069] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:50.452 [2024-12-10 22:45:57.942107] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:50.452 [2024-12-10 22:45:57.942182] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:50.452 [2024-12-10 22:45:57.942196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:50.452 [2024-12-10 22:45:57.942201] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:50.452 [2024-12-10 22:45:57.943072] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:50.452 [2024-12-10 22:45:57.943082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:50.452 [2024-12-10 22:45:57.943089] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:50.452 [2024-12-10 22:45:57.944076] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:50.452 [2024-12-10 22:45:57.944085] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:50.452 [2024-12-10 22:45:57.944091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:50.452 [2024-12-10 22:45:57.945079] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:50.452 [2024-12-10 22:45:57.945089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:50.452 [2024-12-10 22:45:57.946086] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:50.452 [2024-12-10 22:45:57.946094] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:50.452 [2024-12-10 22:45:57.946098] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:50.452 [2024-12-10 22:45:57.946105] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:50.452 [2024-12-10 22:45:57.946212] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:50.452 [2024-12-10 22:45:57.946218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:50.452 [2024-12-10 22:45:57.946222] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:50.452 [2024-12-10 22:45:57.947091] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:50.452 [2024-12-10 22:45:57.948096] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:50.452 [2024-12-10 22:45:57.949101] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:50.452 [2024-12-10 22:45:57.950099] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:50.452 [2024-12-10 22:45:57.950141] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:50.452 [2024-12-10 22:45:57.951116] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:50.452 [2024-12-10 22:45:57.951126] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:50.452 [2024-12-10 22:45:57.951130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:50.452 [2024-12-10 22:45:57.951147] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:50.452 [2024-12-10 22:45:57.951161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:50.452 [2024-12-10 22:45:57.951172] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.452 [2024-12-10 22:45:57.951176] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.452 [2024-12-10 22:45:57.951180] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.452 [2024-12-10 22:45:57.951190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.452 [2024-12-10 22:45:57.959166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:50.452 [2024-12-10 22:45:57.959178] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:50.452 [2024-12-10 22:45:57.959182] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:50.452 [2024-12-10 22:45:57.959186] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:50.452 [2024-12-10 22:45:57.959190] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:50.452 [2024-12-10 22:45:57.959195] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:50.452 [2024-12-10 22:45:57.959199] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:50.452 [2024-12-10 22:45:57.959203] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:50.452 [2024-12-10 22:45:57.959212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:50.452 [2024-12-10 22:45:57.959223] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:50.452 [2024-12-10 22:45:57.967165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:50.452 [2024-12-10 22:45:57.967177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.452 [2024-12-10 22:45:57.967187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.452 [2024-12-10 22:45:57.967195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.452 [2024-12-10 22:45:57.967202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:50.453 [2024-12-10 22:45:57.967206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.967216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.967225] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:57.975165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:57.975173] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:50.453 [2024-12-10 22:45:57.975177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.975183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.975188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.975196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:57.983165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:57.983218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.983228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:50.453 
[2024-12-10 22:45:57.983235] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:50.453 [2024-12-10 22:45:57.983239] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:50.453 [2024-12-10 22:45:57.983242] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.453 [2024-12-10 22:45:57.983248] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:57.991166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:57.991176] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:50.453 [2024-12-10 22:45:57.991185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.991191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.991198] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.453 [2024-12-10 22:45:57.991202] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.453 [2024-12-10 22:45:57.991208] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.453 [2024-12-10 22:45:57.991213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:57.999165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:57.999179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.999186] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:57.999193] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:50.453 [2024-12-10 22:45:57.999197] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.453 [2024-12-10 22:45:57.999200] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.453 [2024-12-10 22:45:57.999206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.007165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.007177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007196] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007201] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007205] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007210] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:50.453 [2024-12-10 22:45:58.007214] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:50.453 [2024-12-10 22:45:58.007219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:50.453 [2024-12-10 22:45:58.007234] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.015164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.015178] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.023164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.023176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.031165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 
22:45:58.031179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.039165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.039181] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:50.453 [2024-12-10 22:45:58.039186] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:50.453 [2024-12-10 22:45:58.039190] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:50.453 [2024-12-10 22:45:58.039193] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:50.453 [2024-12-10 22:45:58.039196] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:50.453 [2024-12-10 22:45:58.039203] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:50.453 [2024-12-10 22:45:58.039209] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:50.453 [2024-12-10 22:45:58.039214] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:50.453 [2024-12-10 22:45:58.039217] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.453 [2024-12-10 22:45:58.039223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.039230] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:50.453 [2024-12-10 22:45:58.039234] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:50.453 [2024-12-10 22:45:58.039237] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.453 [2024-12-10 22:45:58.039243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.039250] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:50.453 [2024-12-10 22:45:58.039254] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:50.453 [2024-12-10 22:45:58.039257] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:50.453 [2024-12-10 22:45:58.039263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:50.453 [2024-12-10 22:45:58.047165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.047180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.047190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:50.453 [2024-12-10 22:45:58.047197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:50.453 ===================================================== 00:14:50.454 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:50.454 ===================================================== 00:14:50.454 Controller Capabilities/Features 00:14:50.454 
================================ 00:14:50.454 Vendor ID: 4e58 00:14:50.454 Subsystem Vendor ID: 4e58 00:14:50.454 Serial Number: SPDK2 00:14:50.454 Model Number: SPDK bdev Controller 00:14:50.454 Firmware Version: 25.01 00:14:50.454 Recommended Arb Burst: 6 00:14:50.454 IEEE OUI Identifier: 8d 6b 50 00:14:50.454 Multi-path I/O 00:14:50.454 May have multiple subsystem ports: Yes 00:14:50.454 May have multiple controllers: Yes 00:14:50.454 Associated with SR-IOV VF: No 00:14:50.454 Max Data Transfer Size: 131072 00:14:50.454 Max Number of Namespaces: 32 00:14:50.454 Max Number of I/O Queues: 127 00:14:50.454 NVMe Specification Version (VS): 1.3 00:14:50.454 NVMe Specification Version (Identify): 1.3 00:14:50.454 Maximum Queue Entries: 256 00:14:50.454 Contiguous Queues Required: Yes 00:14:50.454 Arbitration Mechanisms Supported 00:14:50.454 Weighted Round Robin: Not Supported 00:14:50.454 Vendor Specific: Not Supported 00:14:50.454 Reset Timeout: 15000 ms 00:14:50.454 Doorbell Stride: 4 bytes 00:14:50.454 NVM Subsystem Reset: Not Supported 00:14:50.454 Command Sets Supported 00:14:50.454 NVM Command Set: Supported 00:14:50.454 Boot Partition: Not Supported 00:14:50.454 Memory Page Size Minimum: 4096 bytes 00:14:50.454 Memory Page Size Maximum: 4096 bytes 00:14:50.454 Persistent Memory Region: Not Supported 00:14:50.454 Optional Asynchronous Events Supported 00:14:50.454 Namespace Attribute Notices: Supported 00:14:50.454 Firmware Activation Notices: Not Supported 00:14:50.454 ANA Change Notices: Not Supported 00:14:50.454 PLE Aggregate Log Change Notices: Not Supported 00:14:50.454 LBA Status Info Alert Notices: Not Supported 00:14:50.454 EGE Aggregate Log Change Notices: Not Supported 00:14:50.454 Normal NVM Subsystem Shutdown event: Not Supported 00:14:50.454 Zone Descriptor Change Notices: Not Supported 00:14:50.454 Discovery Log Change Notices: Not Supported 00:14:50.454 Controller Attributes 00:14:50.454 128-bit Host Identifier: Supported 00:14:50.454 
Non-Operational Permissive Mode: Not Supported 00:14:50.454 NVM Sets: Not Supported 00:14:50.454 Read Recovery Levels: Not Supported 00:14:50.454 Endurance Groups: Not Supported 00:14:50.454 Predictable Latency Mode: Not Supported 00:14:50.454 Traffic Based Keep ALive: Not Supported 00:14:50.454 Namespace Granularity: Not Supported 00:14:50.454 SQ Associations: Not Supported 00:14:50.454 UUID List: Not Supported 00:14:50.454 Multi-Domain Subsystem: Not Supported 00:14:50.454 Fixed Capacity Management: Not Supported 00:14:50.454 Variable Capacity Management: Not Supported 00:14:50.454 Delete Endurance Group: Not Supported 00:14:50.454 Delete NVM Set: Not Supported 00:14:50.454 Extended LBA Formats Supported: Not Supported 00:14:50.454 Flexible Data Placement Supported: Not Supported 00:14:50.454 00:14:50.454 Controller Memory Buffer Support 00:14:50.454 ================================ 00:14:50.454 Supported: No 00:14:50.454 00:14:50.454 Persistent Memory Region Support 00:14:50.454 ================================ 00:14:50.454 Supported: No 00:14:50.454 00:14:50.454 Admin Command Set Attributes 00:14:50.454 ============================ 00:14:50.454 Security Send/Receive: Not Supported 00:14:50.454 Format NVM: Not Supported 00:14:50.454 Firmware Activate/Download: Not Supported 00:14:50.454 Namespace Management: Not Supported 00:14:50.454 Device Self-Test: Not Supported 00:14:50.454 Directives: Not Supported 00:14:50.454 NVMe-MI: Not Supported 00:14:50.454 Virtualization Management: Not Supported 00:14:50.454 Doorbell Buffer Config: Not Supported 00:14:50.454 Get LBA Status Capability: Not Supported 00:14:50.454 Command & Feature Lockdown Capability: Not Supported 00:14:50.454 Abort Command Limit: 4 00:14:50.454 Async Event Request Limit: 4 00:14:50.454 Number of Firmware Slots: N/A 00:14:50.454 Firmware Slot 1 Read-Only: N/A 00:14:50.454 Firmware Activation Without Reset: N/A 00:14:50.454 Multiple Update Detection Support: N/A 00:14:50.454 Firmware Update 
Granularity: No Information Provided 00:14:50.454 Per-Namespace SMART Log: No 00:14:50.454 Asymmetric Namespace Access Log Page: Not Supported 00:14:50.454 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:50.454 Command Effects Log Page: Supported 00:14:50.454 Get Log Page Extended Data: Supported 00:14:50.454 Telemetry Log Pages: Not Supported 00:14:50.454 Persistent Event Log Pages: Not Supported 00:14:50.454 Supported Log Pages Log Page: May Support 00:14:50.454 Commands Supported & Effects Log Page: Not Supported 00:14:50.454 Feature Identifiers & Effects Log Page:May Support 00:14:50.454 NVMe-MI Commands & Effects Log Page: May Support 00:14:50.454 Data Area 4 for Telemetry Log: Not Supported 00:14:50.454 Error Log Page Entries Supported: 128 00:14:50.454 Keep Alive: Supported 00:14:50.454 Keep Alive Granularity: 10000 ms 00:14:50.454 00:14:50.454 NVM Command Set Attributes 00:14:50.454 ========================== 00:14:50.454 Submission Queue Entry Size 00:14:50.454 Max: 64 00:14:50.454 Min: 64 00:14:50.454 Completion Queue Entry Size 00:14:50.454 Max: 16 00:14:50.454 Min: 16 00:14:50.454 Number of Namespaces: 32 00:14:50.454 Compare Command: Supported 00:14:50.454 Write Uncorrectable Command: Not Supported 00:14:50.454 Dataset Management Command: Supported 00:14:50.454 Write Zeroes Command: Supported 00:14:50.454 Set Features Save Field: Not Supported 00:14:50.454 Reservations: Not Supported 00:14:50.454 Timestamp: Not Supported 00:14:50.454 Copy: Supported 00:14:50.454 Volatile Write Cache: Present 00:14:50.454 Atomic Write Unit (Normal): 1 00:14:50.454 Atomic Write Unit (PFail): 1 00:14:50.454 Atomic Compare & Write Unit: 1 00:14:50.454 Fused Compare & Write: Supported 00:14:50.454 Scatter-Gather List 00:14:50.454 SGL Command Set: Supported (Dword aligned) 00:14:50.454 SGL Keyed: Not Supported 00:14:50.454 SGL Bit Bucket Descriptor: Not Supported 00:14:50.454 SGL Metadata Pointer: Not Supported 00:14:50.454 Oversized SGL: Not Supported 00:14:50.454 SGL 
Metadata Address: Not Supported 00:14:50.454 SGL Offset: Not Supported 00:14:50.454 Transport SGL Data Block: Not Supported 00:14:50.454 Replay Protected Memory Block: Not Supported 00:14:50.454 00:14:50.454 Firmware Slot Information 00:14:50.454 ========================= 00:14:50.454 Active slot: 1 00:14:50.454 Slot 1 Firmware Revision: 25.01 00:14:50.454 00:14:50.454 00:14:50.454 Commands Supported and Effects 00:14:50.454 ============================== 00:14:50.454 Admin Commands 00:14:50.454 -------------- 00:14:50.454 Get Log Page (02h): Supported 00:14:50.454 Identify (06h): Supported 00:14:50.454 Abort (08h): Supported 00:14:50.454 Set Features (09h): Supported 00:14:50.454 Get Features (0Ah): Supported 00:14:50.454 Asynchronous Event Request (0Ch): Supported 00:14:50.454 Keep Alive (18h): Supported 00:14:50.454 I/O Commands 00:14:50.454 ------------ 00:14:50.454 Flush (00h): Supported LBA-Change 00:14:50.454 Write (01h): Supported LBA-Change 00:14:50.454 Read (02h): Supported 00:14:50.454 Compare (05h): Supported 00:14:50.454 Write Zeroes (08h): Supported LBA-Change 00:14:50.454 Dataset Management (09h): Supported LBA-Change 00:14:50.454 Copy (19h): Supported LBA-Change 00:14:50.454 00:14:50.454 Error Log 00:14:50.454 ========= 00:14:50.454 00:14:50.454 Arbitration 00:14:50.454 =========== 00:14:50.454 Arbitration Burst: 1 00:14:50.454 00:14:50.454 Power Management 00:14:50.454 ================ 00:14:50.454 Number of Power States: 1 00:14:50.454 Current Power State: Power State #0 00:14:50.454 Power State #0: 00:14:50.454 Max Power: 0.00 W 00:14:50.454 Non-Operational State: Operational 00:14:50.454 Entry Latency: Not Reported 00:14:50.454 Exit Latency: Not Reported 00:14:50.454 Relative Read Throughput: 0 00:14:50.454 Relative Read Latency: 0 00:14:50.454 Relative Write Throughput: 0 00:14:50.454 Relative Write Latency: 0 00:14:50.454 Idle Power: Not Reported 00:14:50.454 Active Power: Not Reported 00:14:50.454 Non-Operational Permissive Mode: Not 
Supported 00:14:50.454 00:14:50.454 Health Information 00:14:50.454 ================== 00:14:50.454 Critical Warnings: 00:14:50.454 Available Spare Space: OK 00:14:50.454 Temperature: OK 00:14:50.454 Device Reliability: OK 00:14:50.454 Read Only: No 00:14:50.454 Volatile Memory Backup: OK 00:14:50.454 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:50.455 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:50.455 Available Spare: 0% 00:14:50.455 Available Spare Threshold: 0% 00:14:50.455 [2024-12-10 22:45:58.047284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:50.455 [2024-12-10 22:45:58.055164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:50.455 [2024-12-10 22:45:58.055192] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:50.455 [2024-12-10 22:45:58.055201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.455 [2024-12-10 22:45:58.055207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.455 [2024-12-10 22:45:58.055215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.455 [2024-12-10 22:45:58.055220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:50.455 [2024-12-10 22:45:58.055275] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:50.455 [2024-12-10 22:45:58.055286] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:50.455
[2024-12-10 22:45:58.056272] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:50.455 [2024-12-10 22:45:58.056316] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:50.455 [2024-12-10 22:45:58.056322] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:50.455 [2024-12-10 22:45:58.057279] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:50.455 [2024-12-10 22:45:58.057290] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:50.455 [2024-12-10 22:45:58.057340] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:50.455 [2024-12-10 22:45:58.058327] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:50.455 Life Percentage Used: 0% 00:14:50.455 Data Units Read: 0 00:14:50.455 Data Units Written: 0 00:14:50.455 Host Read Commands: 0 00:14:50.455 Host Write Commands: 0 00:14:50.455 Controller Busy Time: 0 minutes 00:14:50.455 Power Cycles: 0 00:14:50.455 Power On Hours: 0 hours 00:14:50.455 Unsafe Shutdowns: 0 00:14:50.455 Unrecoverable Media Errors: 0 00:14:50.455 Lifetime Error Log Entries: 0 00:14:50.455 Warning Temperature Time: 0 minutes 00:14:50.455 Critical Temperature Time: 0 minutes 00:14:50.455 00:14:50.455 Number of Queues 00:14:50.455 ================ 00:14:50.455 Number of I/O Submission Queues: 127 00:14:50.455 Number of I/O Completion Queues: 127 00:14:50.455 00:14:50.455 Active Namespaces 00:14:50.455 ================= 00:14:50.455 Namespace ID:1 00:14:50.455 Error Recovery Timeout: Unlimited
00:14:50.455 Command Set Identifier: NVM (00h) 00:14:50.455 Deallocate: Supported 00:14:50.455 Deallocated/Unwritten Error: Not Supported 00:14:50.455 Deallocated Read Value: Unknown 00:14:50.455 Deallocate in Write Zeroes: Not Supported 00:14:50.455 Deallocated Guard Field: 0xFFFF 00:14:50.455 Flush: Supported 00:14:50.455 Reservation: Supported 00:14:50.455 Namespace Sharing Capabilities: Multiple Controllers 00:14:50.455 Size (in LBAs): 131072 (0GiB) 00:14:50.455 Capacity (in LBAs): 131072 (0GiB) 00:14:50.455 Utilization (in LBAs): 131072 (0GiB) 00:14:50.455 NGUID: 9244013F8EA841DE89ABFCEB8C69B034 00:14:50.455 UUID: 9244013f-8ea8-41de-89ab-fceb8c69b034 00:14:50.455 Thin Provisioning: Not Supported 00:14:50.455 Per-NS Atomic Units: Yes 00:14:50.455 Atomic Boundary Size (Normal): 0 00:14:50.455 Atomic Boundary Size (PFail): 0 00:14:50.455 Atomic Boundary Offset: 0 00:14:50.455 Maximum Single Source Range Length: 65535 00:14:50.455 Maximum Copy Length: 65535 00:14:50.455 Maximum Source Range Count: 1 00:14:50.455 NGUID/EUI64 Never Reused: No 00:14:50.455 Namespace Write Protected: No 00:14:50.455 Number of LBA Formats: 1 00:14:50.455 Current LBA Format: LBA Format #00 00:14:50.455 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:50.455 00:14:50.455 22:45:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:50.714 [2024-12-10 22:45:58.284711] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.982 Initializing NVMe Controllers 00:14:55.982 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.982 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:55.982 Initialization complete. Launching workers. 00:14:55.982 ======================================================== 00:14:55.982 Latency(us) 00:14:55.982 Device Information : IOPS MiB/s Average min max 00:14:55.982 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39935.35 156.00 3205.01 1006.19 10567.16 00:14:55.982 ======================================================== 00:14:55.982 Total : 39935.35 156.00 3205.01 1006.19 10567.16 00:14:55.982 00:14:55.982 [2024-12-10 22:46:03.394424] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.982 22:46:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:55.982 [2024-12-10 22:46:03.636168] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.254 Initializing NVMe Controllers 00:15:01.254 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.254 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:01.254 Initialization complete. Launching workers. 
00:15:01.254 ======================================================== 00:15:01.254 Latency(us) 00:15:01.254 Device Information : IOPS MiB/s Average min max 00:15:01.254 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39880.87 155.78 3209.39 976.97 10252.43 00:15:01.254 ======================================================== 00:15:01.254 Total : 39880.87 155.78 3209.39 976.97 10252.43 00:15:01.254 00:15:01.254 [2024-12-10 22:46:08.656816] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.254 22:46:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:01.254 [2024-12-10 22:46:08.861282] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.526 [2024-12-10 22:46:14.007262] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.526 Initializing NVMe Controllers 00:15:06.526 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.526 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.526 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:06.526 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:06.526 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:06.526 Initialization complete. Launching workers. 
00:15:06.526 Starting thread on core 2 00:15:06.526 Starting thread on core 3 00:15:06.526 Starting thread on core 1 00:15:06.526 22:46:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:06.785 [2024-12-10 22:46:14.303608] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.074 [2024-12-10 22:46:17.393360] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.074 Initializing NVMe Controllers 00:15:10.074 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.074 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.074 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:10.074 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:10.074 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:10.074 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:10.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration run with configuration: 00:15:10.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:10.074 Initialization complete. Launching workers. 
00:15:10.074 Starting thread on core 1 with urgent priority queue 00:15:10.074 Starting thread on core 2 with urgent priority queue 00:15:10.074 Starting thread on core 3 with urgent priority queue 00:15:10.074 Starting thread on core 0 with urgent priority queue 00:15:10.074 SPDK bdev Controller (SPDK2 ) core 0: 4820.00 IO/s 20.75 secs/100000 ios 00:15:10.074 SPDK bdev Controller (SPDK2 ) core 1: 5684.67 IO/s 17.59 secs/100000 ios 00:15:10.074 SPDK bdev Controller (SPDK2 ) core 2: 6025.67 IO/s 16.60 secs/100000 ios 00:15:10.074 SPDK bdev Controller (SPDK2 ) core 3: 4676.67 IO/s 21.38 secs/100000 ios 00:15:10.074 ======================================================== 00:15:10.074 00:15:10.074 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:10.074 [2024-12-10 22:46:17.681638] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:10.074 Initializing NVMe Controllers 00:15:10.074 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.074 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:10.074 Namespace ID: 1 size: 0GB 00:15:10.074 Initialization complete. 00:15:10.074 INFO: using host memory buffer for IO 00:15:10.074 Hello world! 
00:15:10.074 [2024-12-10 22:46:17.691696] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:10.074 22:46:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:10.333 [2024-12-10 22:46:17.973951] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.716 Initializing NVMe Controllers 00:15:11.716 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.716 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:11.716 Initialization complete. Launching workers. 00:15:11.716 submit (in ns) avg, min, max = 5988.6, 3195.7, 4000175.7 00:15:11.716 complete (in ns) avg, min, max = 20137.6, 1792.2, 4036994.8 00:15:11.716 00:15:11.716 Submit histogram 00:15:11.716 ================ 00:15:11.716 Range in us Cumulative Count 00:15:11.716 3.186 - 3.200: 0.0063% ( 1) 00:15:11.716 3.200 - 3.214: 0.0125% ( 1) 00:15:11.716 3.214 - 3.228: 0.0376% ( 4) 00:15:11.716 3.228 - 3.242: 0.0626% ( 4) 00:15:11.716 3.242 - 3.256: 0.0689% ( 1) 00:15:11.716 3.256 - 3.270: 0.1252% ( 9) 00:15:11.716 3.270 - 3.283: 0.3569% ( 37) 00:15:11.716 3.283 - 3.297: 0.8953% ( 86) 00:15:11.716 3.297 - 3.311: 1.4963% ( 96) 00:15:11.716 3.311 - 3.325: 2.5543% ( 169) 00:15:11.716 3.325 - 3.339: 4.7580% ( 352) 00:15:11.716 3.339 - 3.353: 9.4096% ( 743) 00:15:11.716 3.353 - 3.367: 14.8751% ( 873) 00:15:11.716 3.367 - 3.381: 20.7851% ( 944) 00:15:11.716 3.381 - 3.395: 27.4025% ( 1057) 00:15:11.716 3.395 - 3.409: 33.2937% ( 941) 00:15:11.716 3.409 - 3.423: 38.2834% ( 797) 00:15:11.716 3.423 - 3.437: 43.3482% ( 809) 00:15:11.716 3.437 - 3.450: 48.0436% ( 750) 00:15:11.716 3.450 - 3.464: 51.8750% ( 612) 00:15:11.716 3.464 - 3.478: 55.7816% ( 624) 00:15:11.716 3.478 
- 3.492: 61.9295% ( 982) 00:15:11.716 3.492 - 3.506: 68.2840% ( 1015) 00:15:11.716 3.506 - 3.520: 72.5349% ( 679) 00:15:11.716 3.520 - 3.534: 77.2491% ( 753) 00:15:11.716 3.534 - 3.548: 81.6503% ( 703) 00:15:11.716 3.548 - 3.562: 84.2860% ( 421) 00:15:11.716 3.562 - 3.590: 86.8966% ( 417) 00:15:11.716 3.590 - 3.617: 87.8420% ( 151) 00:15:11.716 3.617 - 3.645: 89.0127% ( 187) 00:15:11.716 3.645 - 3.673: 90.7594% ( 279) 00:15:11.716 3.673 - 3.701: 92.5374% ( 284) 00:15:11.716 3.701 - 3.729: 94.0712% ( 245) 00:15:11.716 3.729 - 3.757: 95.8492% ( 284) 00:15:11.716 3.757 - 3.784: 97.3393% ( 238) 00:15:11.716 3.784 - 3.812: 98.3284% ( 158) 00:15:11.716 3.812 - 3.840: 98.9482% ( 99) 00:15:11.716 3.840 - 3.868: 99.2988% ( 56) 00:15:11.717 3.868 - 3.896: 99.5242% ( 36) 00:15:11.717 3.896 - 3.923: 99.5993% ( 12) 00:15:11.717 3.923 - 3.951: 99.6369% ( 6) 00:15:11.717 3.951 - 3.979: 99.6494% ( 2) 00:15:11.717 3.979 - 4.007: 99.6557% ( 1) 00:15:11.717 4.007 - 4.035: 99.6682% ( 2) 00:15:11.717 4.063 - 4.090: 99.6745% ( 1) 00:15:11.717 4.341 - 4.369: 99.6807% ( 1) 00:15:11.717 5.315 - 5.343: 99.6870% ( 1) 00:15:11.717 5.649 - 5.677: 99.6932% ( 1) 00:15:11.717 5.983 - 6.010: 99.6995% ( 1) 00:15:11.717 6.122 - 6.150: 99.7058% ( 1) 00:15:11.717 6.205 - 6.233: 99.7120% ( 1) 00:15:11.717 6.456 - 6.483: 99.7183% ( 1) 00:15:11.717 6.539 - 6.567: 99.7371% ( 3) 00:15:11.717 6.567 - 6.595: 99.7496% ( 2) 00:15:11.717 6.623 - 6.650: 99.7621% ( 2) 00:15:11.717 6.734 - 6.762: 99.7684% ( 1) 00:15:11.717 6.762 - 6.790: 99.7746% ( 1) 00:15:11.717 6.901 - 6.929: 99.7871% ( 2) 00:15:11.717 6.984 - 7.012: 99.7934% ( 1) 00:15:11.717 7.012 - 7.040: 99.7997% ( 1) 00:15:11.717 7.040 - 7.068: 99.8122% ( 2) 00:15:11.717 7.123 - 7.179: 99.8372% ( 4) 00:15:11.717 7.179 - 7.235: 99.8435% ( 1) 00:15:11.717 7.235 - 7.290: 99.8497% ( 1) 00:15:11.717 7.290 - 7.346: 99.8560% ( 1) 00:15:11.717 7.680 - 7.736: 99.8623% ( 1) 00:15:11.717 7.791 - 7.847: 99.8685% ( 1) 00:15:11.717 7.903 - 7.958: 99.8748% ( 1) 
00:15:11.717 8.181 - 8.237: 99.8810% ( 1) 00:15:11.717 8.459 - 8.515: 99.8936% ( 2) 00:15:11.717 9.071 - 9.127: 99.8998% ( 1) 00:15:11.717 9.405 - 9.461: 99.9061% ( 1) 00:15:11.717 11.743 - 11.798: 99.9124% ( 1) 00:15:11.717 13.746 - 13.802: 99.9186% ( 1) 00:15:11.717 13.802 - 13.857: 99.9249% ( 1) 00:15:11.717 14.024 - 14.080: 99.9311% ( 1) 00:15:11.717 14.470 - 14.581: 99.9374% ( 1) 00:15:11.717 3989.148 - 4017.642: 100.0000% ( 10) 00:15:11.717 00:15:11.717 Complete histogram 00:15:11.717 ================== 00:15:11.717 [2024-12-10 22:46:19.069303] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.717 Range in us Cumulative Count 00:15:11.717 1.781 - 1.795: 0.0125% ( 2) 00:15:11.717 1.795 - 1.809: 0.4758% ( 74) 00:15:11.717 1.809 - 1.823: 1.7717% ( 207) 00:15:11.717 1.823 - 1.837: 2.0848% ( 50) 00:15:11.717 1.850 - 1.864: 3.7188% ( 261) 00:15:11.717 1.864 - 1.878: 33.4439% ( 4748) 00:15:11.717 1.878 - 1.892: 78.1444% ( 7140) 00:15:11.717 1.892 - 1.906: 90.1271% ( 1914) 00:15:11.717 1.906 - 1.920: 94.4719% ( 694) 00:15:11.717 1.920 - 1.934: 95.6552% ( 189) 00:15:11.717 1.934 - 1.948: 96.6569% ( 160) 00:15:11.717 1.948 - 1.962: 97.8338% ( 188) 00:15:11.717 1.962 - 1.976: 98.8981% ( 170) 00:15:11.717 1.976 - 1.990: 99.0797% ( 29) 00:15:11.717 1.990 - 2.003: 99.1986% ( 19) 00:15:11.717 2.003 - 2.017: 99.2300% ( 5) 00:15:11.717 2.017 - 2.031: 99.2487% ( 3) 00:15:11.717 2.031 - 2.045: 99.2550% ( 1) 00:15:11.717 2.045 - 2.059: 99.2613% ( 1) 00:15:11.717 2.073 - 2.087: 99.2675% ( 1) 00:15:11.717 2.087 - 2.101: 99.2738% ( 1) 00:15:11.717 2.101 - 2.115: 99.2863% ( 2) 00:15:11.717 2.115 - 2.129: 99.2926% ( 1) 00:15:11.717 2.129 - 2.143: 99.3051% ( 2) 00:15:11.717 2.157 - 2.170: 99.3113% ( 1) 00:15:11.717 2.184 - 2.198: 99.3176% ( 1) 00:15:11.717 2.212 - 2.226: 99.3239% ( 1) 00:15:11.717 2.240 - 2.254: 99.3301% ( 1) 00:15:11.717 2.254 - 2.268: 99.3364% ( 1) 00:15:11.717 2.282 - 2.296: 99.3426% ( 1) 00:15:11.717
2.296 - 2.310: 99.3489% ( 1) 00:15:11.717 2.310 - 2.323: 99.3552% ( 1) 00:15:11.717 2.323 - 2.337: 99.3614% ( 1) 00:15:11.717 2.365 - 2.379: 99.3677% ( 1) 00:15:11.717 3.478 - 3.492: 99.3739% ( 1) 00:15:11.717 3.896 - 3.923: 99.3802% ( 1) 00:15:11.717 3.979 - 4.007: 99.3865% ( 1) 00:15:11.717 4.035 - 4.063: 99.3927% ( 1) 00:15:11.717 4.146 - 4.174: 99.3990% ( 1) 00:15:11.717 4.313 - 4.341: 99.4052% ( 1) 00:15:11.717 4.536 - 4.563: 99.4115% ( 1) 00:15:11.717 4.897 - 4.925: 99.4178% ( 1) 00:15:11.717 4.925 - 4.953: 99.4240% ( 1) 00:15:11.717 5.037 - 5.064: 99.4303% ( 1) 00:15:11.717 5.120 - 5.148: 99.4365% ( 1) 00:15:11.717 5.176 - 5.203: 99.4491% ( 2) 00:15:11.717 5.454 - 5.482: 99.4553% ( 1) 00:15:11.717 5.621 - 5.649: 99.4616% ( 1) 00:15:11.717 5.677 - 5.704: 99.4679% ( 1) 00:15:11.717 5.760 - 5.788: 99.4741% ( 1) 00:15:11.717 5.788 - 5.816: 99.4804% ( 1) 00:15:11.717 5.816 - 5.843: 99.4866% ( 1) 00:15:11.717 5.843 - 5.871: 99.4929% ( 1) 00:15:11.717 5.871 - 5.899: 99.4992% ( 1) 00:15:11.717 5.927 - 5.955: 99.5054% ( 1) 00:15:11.717 5.983 - 6.010: 99.5117% ( 1) 00:15:11.717 6.177 - 6.205: 99.5179% ( 1) 00:15:11.717 6.289 - 6.317: 99.5242% ( 1) 00:15:11.717 6.539 - 6.567: 99.5305% ( 1) 00:15:11.717 7.040 - 7.068: 99.5367% ( 1) 00:15:11.717 10.407 - 10.463: 99.5430% ( 1) 00:15:11.717 3989.148 - 4017.642: 99.9937% ( 72) 00:15:11.717 4017.642 - 4046.136: 100.0000% ( 1) 00:15:11.717 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local 
malloc_num=Malloc4 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:11.717 [ 00:15:11.717 { 00:15:11.717 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:11.717 "subtype": "Discovery", 00:15:11.717 "listen_addresses": [], 00:15:11.717 "allow_any_host": true, 00:15:11.717 "hosts": [] 00:15:11.717 }, 00:15:11.717 { 00:15:11.717 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:11.717 "subtype": "NVMe", 00:15:11.717 "listen_addresses": [ 00:15:11.717 { 00:15:11.717 "trtype": "VFIOUSER", 00:15:11.717 "adrfam": "IPv4", 00:15:11.717 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:11.717 "trsvcid": "0" 00:15:11.717 } 00:15:11.717 ], 00:15:11.717 "allow_any_host": true, 00:15:11.717 "hosts": [], 00:15:11.717 "serial_number": "SPDK1", 00:15:11.717 "model_number": "SPDK bdev Controller", 00:15:11.717 "max_namespaces": 32, 00:15:11.717 "min_cntlid": 1, 00:15:11.717 "max_cntlid": 65519, 00:15:11.717 "namespaces": [ 00:15:11.717 { 00:15:11.717 "nsid": 1, 00:15:11.717 "bdev_name": "Malloc1", 00:15:11.717 "name": "Malloc1", 00:15:11.717 "nguid": "2FE3BC116A044A848F1DA01DACEA658C", 00:15:11.717 "uuid": "2fe3bc11-6a04-4a84-8f1d-a01dacea658c" 00:15:11.717 }, 00:15:11.717 { 00:15:11.717 "nsid": 2, 00:15:11.717 "bdev_name": "Malloc3", 00:15:11.717 "name": "Malloc3", 00:15:11.717 "nguid": "7894D0247C2A4F7EA9FF49F6198EAE67", 00:15:11.717 "uuid": "7894d024-7c2a-4f7e-a9ff-49f6198eae67" 00:15:11.717 } 00:15:11.717 ] 00:15:11.717 }, 00:15:11.717 { 00:15:11.717 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:11.717 "subtype": "NVMe", 00:15:11.717 "listen_addresses": [ 00:15:11.717 { 00:15:11.717 "trtype": "VFIOUSER", 00:15:11.717 "adrfam": "IPv4", 00:15:11.717 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:11.717 "trsvcid": "0" 00:15:11.717 } 00:15:11.717 ], 00:15:11.717 "allow_any_host": true, 00:15:11.717 "hosts": [], 00:15:11.717 
"serial_number": "SPDK2", 00:15:11.717 "model_number": "SPDK bdev Controller", 00:15:11.717 "max_namespaces": 32, 00:15:11.717 "min_cntlid": 1, 00:15:11.717 "max_cntlid": 65519, 00:15:11.717 "namespaces": [ 00:15:11.717 { 00:15:11.717 "nsid": 1, 00:15:11.717 "bdev_name": "Malloc2", 00:15:11.717 "name": "Malloc2", 00:15:11.717 "nguid": "9244013F8EA841DE89ABFCEB8C69B034", 00:15:11.717 "uuid": "9244013f-8ea8-41de-89ab-fceb8c69b034" 00:15:11.717 } 00:15:11.717 ] 00:15:11.717 } 00:15:11.717 ] 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2343981 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:11.717 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:11.976 [2024-12-10 22:46:19.474601] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.976 Malloc4 00:15:11.976 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:11.976 [2024-12-10 22:46:19.701308] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.235 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:12.235 Asynchronous Event Request test 00:15:12.235 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.235 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:12.235 Registering asynchronous event callbacks... 00:15:12.235 Starting namespace attribute notice tests for all controllers... 00:15:12.235 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:12.235 aer_cb - Changed Namespace 00:15:12.235 Cleaning up... 
00:15:12.235 [ 00:15:12.235 { 00:15:12.235 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:12.235 "subtype": "Discovery", 00:15:12.235 "listen_addresses": [], 00:15:12.235 "allow_any_host": true, 00:15:12.235 "hosts": [] 00:15:12.235 }, 00:15:12.235 { 00:15:12.235 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:12.235 "subtype": "NVMe", 00:15:12.235 "listen_addresses": [ 00:15:12.235 { 00:15:12.235 "trtype": "VFIOUSER", 00:15:12.235 "adrfam": "IPv4", 00:15:12.235 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:12.235 "trsvcid": "0" 00:15:12.235 } 00:15:12.235 ], 00:15:12.235 "allow_any_host": true, 00:15:12.235 "hosts": [], 00:15:12.235 "serial_number": "SPDK1", 00:15:12.235 "model_number": "SPDK bdev Controller", 00:15:12.235 "max_namespaces": 32, 00:15:12.235 "min_cntlid": 1, 00:15:12.235 "max_cntlid": 65519, 00:15:12.235 "namespaces": [ 00:15:12.235 { 00:15:12.235 "nsid": 1, 00:15:12.235 "bdev_name": "Malloc1", 00:15:12.236 "name": "Malloc1", 00:15:12.236 "nguid": "2FE3BC116A044A848F1DA01DACEA658C", 00:15:12.236 "uuid": "2fe3bc11-6a04-4a84-8f1d-a01dacea658c" 00:15:12.236 }, 00:15:12.236 { 00:15:12.236 "nsid": 2, 00:15:12.236 "bdev_name": "Malloc3", 00:15:12.236 "name": "Malloc3", 00:15:12.236 "nguid": "7894D0247C2A4F7EA9FF49F6198EAE67", 00:15:12.236 "uuid": "7894d024-7c2a-4f7e-a9ff-49f6198eae67" 00:15:12.236 } 00:15:12.236 ] 00:15:12.236 }, 00:15:12.236 { 00:15:12.236 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:12.236 "subtype": "NVMe", 00:15:12.236 "listen_addresses": [ 00:15:12.236 { 00:15:12.236 "trtype": "VFIOUSER", 00:15:12.236 "adrfam": "IPv4", 00:15:12.236 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:12.236 "trsvcid": "0" 00:15:12.236 } 00:15:12.236 ], 00:15:12.236 "allow_any_host": true, 00:15:12.236 "hosts": [], 00:15:12.236 "serial_number": "SPDK2", 00:15:12.236 "model_number": "SPDK bdev Controller", 00:15:12.236 "max_namespaces": 32, 00:15:12.236 "min_cntlid": 1, 00:15:12.236 "max_cntlid": 65519, 00:15:12.236 "namespaces": [ 
00:15:12.236 { 00:15:12.236 "nsid": 1, 00:15:12.236 "bdev_name": "Malloc2", 00:15:12.236 "name": "Malloc2", 00:15:12.236 "nguid": "9244013F8EA841DE89ABFCEB8C69B034", 00:15:12.236 "uuid": "9244013f-8ea8-41de-89ab-fceb8c69b034" 00:15:12.236 }, 00:15:12.236 { 00:15:12.236 "nsid": 2, 00:15:12.236 "bdev_name": "Malloc4", 00:15:12.236 "name": "Malloc4", 00:15:12.236 "nguid": "9F377A94EC5C4EABA649B8AE6B663099", 00:15:12.236 "uuid": "9f377a94-ec5c-4eab-a649-b8ae6b663099" 00:15:12.236 } 00:15:12.236 ] 00:15:12.236 } 00:15:12.236 ] 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2343981 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2336167 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2336167 ']' 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2336167 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:12.236 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2336167 00:15:12.495 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:12.495 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:12.495 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2336167' 00:15:12.495 killing process with pid 2336167 00:15:12.495 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2336167 00:15:12.495 22:46:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2336167 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2344023 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2344023' 00:15:12.495 Process pid: 2344023 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2344023 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2344023 ']' 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:12.495 
22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:12.495 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:12.754 [2024-12-10 22:46:20.265229] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:12.754 [2024-12-10 22:46:20.266141] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:15:12.754 [2024-12-10 22:46:20.266194] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.754 [2024-12-10 22:46:20.342674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.754 [2024-12-10 22:46:20.384352] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.754 [2024-12-10 22:46:20.384393] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.754 [2024-12-10 22:46:20.384401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.754 [2024-12-10 22:46:20.384407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.754 [2024-12-10 22:46:20.384412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:12.754 [2024-12-10 22:46:20.385854] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.754 [2024-12-10 22:46:20.385961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.754 [2024-12-10 22:46:20.386071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:12.754 [2024-12-10 22:46:20.386072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.755 [2024-12-10 22:46:20.456242] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:12.755 [2024-12-10 22:46:20.457225] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:12.755 [2024-12-10 22:46:20.457395] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:12.755 [2024-12-10 22:46:20.457696] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:12.755 [2024-12-10 22:46:20.457755] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:15:12.755 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:12.755 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:15:12.755 22:46:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:14.131 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:14.132 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:14.132 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:14.132 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.132 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:14.132 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:14.390 Malloc1 00:15:14.390 22:46:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:14.650 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:14.650 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a 
/var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:14.909 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:14.909 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:14.909 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:15.168 Malloc2 00:15:15.168 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:15.426 22:46:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2344023 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2344023 ']' 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2344023 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:15:15.685 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344023 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344023' 00:15:15.944 killing process with pid 2344023 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2344023 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2344023 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:15.944 00:15:15.944 real 0m50.998s 00:15:15.944 user 3m17.180s 00:15:15.944 sys 0m3.263s 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:15.944 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:15.944 ************************************ 00:15:15.944 END TEST nvmf_vfio_user 00:15:15.944 ************************************ 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:16.204 22:46:23 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:16.204 ************************************ 00:15:16.204 START TEST nvmf_vfio_user_nvme_compliance 00:15:16.204 ************************************ 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:16.204 * Looking for test storage... 00:15:16.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read 
-ra ver2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 
00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:16.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.204 --rc genhtml_branch_coverage=1 00:15:16.204 --rc genhtml_function_coverage=1 00:15:16.204 --rc genhtml_legend=1 00:15:16.204 --rc geninfo_all_blocks=1 00:15:16.204 --rc geninfo_unexecuted_blocks=1 00:15:16.204 00:15:16.204 ' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:16.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.204 --rc genhtml_branch_coverage=1 00:15:16.204 --rc genhtml_function_coverage=1 00:15:16.204 --rc genhtml_legend=1 00:15:16.204 --rc geninfo_all_blocks=1 00:15:16.204 --rc geninfo_unexecuted_blocks=1 00:15:16.204 00:15:16.204 ' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:16.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.204 --rc genhtml_branch_coverage=1 00:15:16.204 --rc genhtml_function_coverage=1 
00:15:16.204 --rc genhtml_legend=1 00:15:16.204 --rc geninfo_all_blocks=1 00:15:16.204 --rc geninfo_unexecuted_blocks=1 00:15:16.204 00:15:16.204 ' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:16.204 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.204 --rc genhtml_branch_coverage=1 00:15:16.204 --rc genhtml_function_coverage=1 00:15:16.204 --rc genhtml_legend=1 00:15:16.204 --rc geninfo_all_blocks=1 00:15:16.204 --rc geninfo_unexecuted_blocks=1 00:15:16.204 00:15:16.204 ' 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:16.204 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.205 22:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:16.205 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:16.205 22:46:23 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2344763 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2344763' 00:15:16.205 Process pid: 2344763 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2344763 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2344763 ']' 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:16.205 22:46:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:16.465 [2024-12-10 22:46:23.974343] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:15:16.465 [2024-12-10 22:46:23.974390] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.465 [2024-12-10 22:46:24.051111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:16.465 [2024-12-10 22:46:24.092048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.465 [2024-12-10 22:46:24.092085] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.465 [2024-12-10 22:46:24.092091] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:16.465 [2024-12-10 22:46:24.092098] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:16.465 [2024-12-10 22:46:24.092103] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:16.465 [2024-12-10 22:46:24.093504] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.465 [2024-12-10 22:46:24.093610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.465 [2024-12-10 22:46:24.093612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.465 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:16.465 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:15:16.465 22:46:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 22:46:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 malloc0 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:15:17.842 22:46:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:17.842 00:15:17.842 00:15:17.842 CUnit - A unit testing framework for C - Version 2.1-3 00:15:17.842 http://cunit.sourceforge.net/ 00:15:17.842 00:15:17.842 00:15:17.842 Suite: nvme_compliance 00:15:17.842 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-10 22:46:25.441653] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.842 [2024-12-10 22:46:25.442986] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:17.842 [2024-12-10 22:46:25.443001] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:17.842 [2024-12-10 22:46:25.443008] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:17.842 [2024-12-10 22:46:25.444670] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.842 passed 00:15:17.842 Test: admin_identify_ctrlr_verify_fused ...[2024-12-10 22:46:25.523250] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:17.842 [2024-12-10 22:46:25.526268] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:17.842 passed 00:15:18.101 Test: admin_identify_ns ...[2024-12-10 22:46:25.605604] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.101 [2024-12-10 22:46:25.666174] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:18.101 [2024-12-10 22:46:25.674169] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:18.101 [2024-12-10 22:46:25.695265] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:15:18.101 passed 00:15:18.101 Test: admin_get_features_mandatory_features ...[2024-12-10 22:46:25.771682] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.101 [2024-12-10 22:46:25.774701] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.101 passed 00:15:18.359 Test: admin_get_features_optional_features ...[2024-12-10 22:46:25.854248] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.359 [2024-12-10 22:46:25.857272] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.359 passed 00:15:18.359 Test: admin_set_features_number_of_queues ...[2024-12-10 22:46:25.933623] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.359 [2024-12-10 22:46:26.042258] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.359 passed 00:15:18.618 Test: admin_get_log_page_mandatory_logs ...[2024-12-10 22:46:26.116471] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.618 [2024-12-10 22:46:26.120495] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.618 passed 00:15:18.618 Test: admin_get_log_page_with_lpo ...[2024-12-10 22:46:26.198467] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.618 [2024-12-10 22:46:26.266169] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:18.618 [2024-12-10 22:46:26.279246] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.618 passed 00:15:18.877 Test: fabric_property_get ...[2024-12-10 22:46:26.356327] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.877 [2024-12-10 22:46:26.357559] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:18.877 [2024-12-10 22:46:26.359353] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.877 passed 00:15:18.877 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-10 22:46:26.436890] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.877 [2024-12-10 22:46:26.438129] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:18.877 [2024-12-10 22:46:26.439908] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:18.877 passed 00:15:18.877 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-10 22:46:26.516665] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:18.877 [2024-12-10 22:46:26.604168] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:19.136 [2024-12-10 22:46:26.620169] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:19.136 [2024-12-10 22:46:26.625255] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.136 passed 00:15:19.136 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-10 22:46:26.697440] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.136 [2024-12-10 22:46:26.698678] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:19.136 [2024-12-10 22:46:26.702476] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.136 passed 00:15:19.136 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-10 22:46:26.780657] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.136 [2024-12-10 22:46:26.857166] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:19.395 [2024-12-10 
22:46:26.881161] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:19.395 [2024-12-10 22:46:26.886242] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.395 passed 00:15:19.395 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-10 22:46:26.960390] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.395 [2024-12-10 22:46:26.961625] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:19.395 [2024-12-10 22:46:26.961648] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:19.395 [2024-12-10 22:46:26.963405] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.395 passed 00:15:19.395 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-10 22:46:27.041601] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.654 [2024-12-10 22:46:27.137166] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:19.654 [2024-12-10 22:46:27.145168] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:19.654 [2024-12-10 22:46:27.153167] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:19.654 [2024-12-10 22:46:27.161166] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:19.654 [2024-12-10 22:46:27.190257] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.654 passed 00:15:19.654 Test: admin_create_io_sq_verify_pc ...[2024-12-10 22:46:27.264418] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:19.654 [2024-12-10 22:46:27.283176] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:19.654 [2024-12-10 22:46:27.300626] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:19.654 passed 00:15:19.654 Test: admin_create_io_qp_max_qps ...[2024-12-10 22:46:27.378177] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.031 [2024-12-10 22:46:28.470170] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:21.290 [2024-12-10 22:46:28.862182] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.290 passed 00:15:21.290 Test: admin_create_io_sq_shared_cq ...[2024-12-10 22:46:28.943357] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:21.549 [2024-12-10 22:46:29.075168] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:21.549 [2024-12-10 22:46:29.112245] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:21.549 passed 00:15:21.549 00:15:21.549 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.549 suites 1 1 n/a 0 0 00:15:21.549 tests 18 18 18 0 0 00:15:21.549 asserts 360 360 360 0 n/a 00:15:21.549 00:15:21.549 Elapsed time = 1.510 seconds 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2344763 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2344763 ']' 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2344763 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344763 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344763' 00:15:21.549 killing process with pid 2344763 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2344763 00:15:21.549 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2344763 00:15:21.808 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:21.808 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:21.808 00:15:21.808 real 0m5.678s 00:15:21.808 user 0m15.837s 00:15:21.808 sys 0m0.542s 00:15:21.808 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:21.809 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:21.809 ************************************ 00:15:21.809 END TEST nvmf_vfio_user_nvme_compliance 00:15:21.809 ************************************ 00:15:21.809 22:46:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:21.809 22:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:21.809 22:46:29 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:21.809 22:46:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:21.809 ************************************ 00:15:21.809 START TEST nvmf_vfio_user_fuzz 00:15:21.809 ************************************ 00:15:21.809 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:22.068 * Looking for test storage... 00:15:22.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:22.068 22:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # 
ver2[v]=2 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:22.068 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:22.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.068 --rc genhtml_branch_coverage=1 00:15:22.068 --rc genhtml_function_coverage=1 00:15:22.068 --rc genhtml_legend=1 00:15:22.068 --rc geninfo_all_blocks=1 00:15:22.068 --rc geninfo_unexecuted_blocks=1 00:15:22.068 00:15:22.068 ' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:22.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.069 --rc genhtml_branch_coverage=1 00:15:22.069 --rc genhtml_function_coverage=1 00:15:22.069 --rc genhtml_legend=1 00:15:22.069 --rc geninfo_all_blocks=1 00:15:22.069 --rc geninfo_unexecuted_blocks=1 00:15:22.069 00:15:22.069 ' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:22.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.069 --rc genhtml_branch_coverage=1 00:15:22.069 --rc genhtml_function_coverage=1 00:15:22.069 --rc genhtml_legend=1 00:15:22.069 --rc geninfo_all_blocks=1 00:15:22.069 --rc geninfo_unexecuted_blocks=1 00:15:22.069 00:15:22.069 ' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:22.069 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:22.069 --rc genhtml_branch_coverage=1 00:15:22.069 --rc genhtml_function_coverage=1 00:15:22.069 --rc genhtml_legend=1 00:15:22.069 --rc geninfo_all_blocks=1 00:15:22.069 --rc geninfo_unexecuted_blocks=1 00:15:22.069 00:15:22.069 ' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.069 22:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.069 22:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:22.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:22.069 22:46:29 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2345755 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2345755' 00:15:22.069 Process pid: 2345755 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2345755 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2345755 ']' 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:22.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:22.069 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:22.328 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:22.328 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:22.328 22:46:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.265 malloc0 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.265 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:23.524 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.524 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER 
subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:23.524 22:46:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:55.607 Fuzzing completed. Shutting down the fuzz application 00:15:55.607 00:15:55.607 Dumping successful admin opcodes: 00:15:55.607 9, 10, 00:15:55.607 Dumping successful io opcodes: 00:15:55.607 0, 00:15:55.607 NS: 0x20000081ef00 I/O qp, Total commands completed: 981011, total successful commands: 3845, random_seed: 2646776064 00:15:55.607 NS: 0x20000081ef00 admin qp, Total commands completed: 240896, total successful commands: 56, random_seed: 819775296 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2345755 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2345755 ']' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2345755 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.607 22:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2345755 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2345755' 00:15:55.607 killing process with pid 2345755 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2345755 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2345755 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:55.607 00:15:55.607 real 0m32.227s 00:15:55.607 user 0m29.884s 00:15:55.607 sys 0m30.735s 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:55.607 ************************************ 00:15:55.607 END TEST nvmf_vfio_user_fuzz 00:15:55.607 ************************************ 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:55.607 ************************************ 00:15:55.607 START TEST nvmf_auth_target 00:15:55.607 ************************************ 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:55.607 * Looking for test storage... 00:15:55.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 
00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:55.607 22:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.607 --rc genhtml_branch_coverage=1 00:15:55.607 --rc genhtml_function_coverage=1 00:15:55.607 --rc genhtml_legend=1 00:15:55.607 --rc geninfo_all_blocks=1 00:15:55.607 --rc geninfo_unexecuted_blocks=1 00:15:55.607 00:15:55.607 ' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.607 --rc genhtml_branch_coverage=1 00:15:55.607 --rc genhtml_function_coverage=1 00:15:55.607 --rc genhtml_legend=1 00:15:55.607 --rc geninfo_all_blocks=1 00:15:55.607 --rc geninfo_unexecuted_blocks=1 00:15:55.607 00:15:55.607 ' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.607 --rc genhtml_branch_coverage=1 00:15:55.607 --rc genhtml_function_coverage=1 00:15:55.607 --rc genhtml_legend=1 00:15:55.607 --rc geninfo_all_blocks=1 00:15:55.607 --rc geninfo_unexecuted_blocks=1 00:15:55.607 00:15:55.607 ' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:55.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:55.607 --rc 
genhtml_branch_coverage=1 00:15:55.607 --rc genhtml_function_coverage=1 00:15:55.607 --rc genhtml_legend=1 00:15:55.607 --rc geninfo_all_blocks=1 00:15:55.607 --rc geninfo_unexecuted_blocks=1 00:15:55.607 00:15:55.607 ' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.607 22:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.607 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:55.608 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:55.608 22:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:55.608 22:47:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:15:55.608 22:47:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:15:59.874 22:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:59.874 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:59.875 22:47:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:59.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:59.875 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:00.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:00.134 
22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:00.134 Found net devices under 0000:86:00.0: cvl_0_0 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:00.134 
22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:00.134 Found net devices under 0000:86:00.1: cvl_0_1 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:00.134 22:47:07 
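The trace above enumerates NICs by matching PCI vendor:device pairs (Intel 0x8086 E810/X722 IDs, Mellanox 0x15b3 mlx5 IDs) before looking up their net devices. A minimal standalone sketch of that matching step, run here against a mock sysfs-style tree so it needs no hardware (the real script reads `/sys/bus/pci/devices`, which is an assumption about its data source; `scan_pci_ids` is a hypothetical helper, not a function from `nvmf/common.sh`):

```shell
#!/usr/bin/env bash
# scan_pci_ids ROOT VENDOR DEVICE -> prints PCI addresses under ROOT whose
# vendor/device attribute files match the requested IDs.
scan_pci_ids() {
    local root=$1 vendor=$2 device=$3 dev
    for dev in "$root"/*; do
        [[ -r "$dev/vendor" && -r "$dev/device" ]] || continue
        if [[ $(<"$dev/vendor") == "$vendor" && $(<"$dev/device") == "$device" ]]; then
            echo "${dev##*/}"
        fi
    done
}

# Demo against a mock tree: one E810-style NIC (0x8086:0x159b) and one
# unrelated device that must not match.
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0" "$root/0000:00:1f.0"
printf '0x8086\n' > "$root/0000:86:00.0/vendor"
printf '0x159b\n' > "$root/0000:86:00.0/device"
printf '0x1234\n' > "$root/0000:00:1f.0/vendor"
printf '0x5678\n' > "$root/0000:00:1f.0/device"
scan_pci_ids "$root" 0x8086 0x159b
```

The real script additionally resolves each matched device's `net/` subdirectory to get interface names such as `cvl_0_0`, as the "Found net devices under 0000:86:00.0" lines show.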
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:00.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:00.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:16:00.134 00:16:00.134 --- 10.0.0.2 ping statistics --- 00:16:00.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.134 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:16:00.134 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:00.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:00.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:16:00.135 00:16:00.135 --- 10.0.0.1 ping statistics --- 00:16:00.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:00.135 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
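The `nvmf_tcp_init` steps above move the target NIC into a fresh namespace, address both ends of the 10.0.0.0/24 link, open TCP port 4420, and verify reachability with `ping`. A dry-run sketch of that plumbing, with a `run` wrapper that only echoes the commands so it can be inspected without root (`setup_tcp_ns` is a hypothetical name for illustration; drop the `echo` to actually apply the commands):

```shell
#!/usr/bin/env bash
# Dry-run wrapper: print each command instead of executing it.
run() { echo "$@"; }

setup_tcp_ns() {
    local ns=$1 tgt_if=$2 ini_if=$3 tgt_ip=$4 ini_ip=$5
    run ip netns add "$ns"
    run ip link set "$tgt_if" netns "$ns"        # target NIC lives inside the netns
    run ip addr add "$ini_ip/24" dev "$ini_if"   # initiator side stays in the root ns
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run ip netns exec "$ns" ip link set lo up
    # open the NVMe/TCP listener port on the initiator-facing interface
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}

setup_tcp_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1 10.0.0.2 10.0.0.1
```

Once the namespace is up, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`, which is exactly what `NVMF_TARGET_NS_CMD` captures.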
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:00.135 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2354204 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2354204 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2354204 ']' 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.394 22:47:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2354291 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@754 -- # digest=null 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=07c9fd9dee5e0a819eb911412c1e2bb9df030406d209b17a 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.iGe 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 07c9fd9dee5e0a819eb911412c1e2bb9df030406d209b17a 0 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 07c9fd9dee5e0a819eb911412c1e2bb9df030406d209b17a 0 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=07c9fd9dee5e0a819eb911412c1e2bb9df030406d209b17a 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.iGe 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.iGe 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.iGe 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b81b6662f792d7ee359cda1172c8025e45f81d357930324cad8801f077955242 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.X79 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b81b6662f792d7ee359cda1172c8025e45f81d357930324cad8801f077955242 3 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b81b6662f792d7ee359cda1172c8025e45f81d357930324cad8801f077955242 3 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b81b6662f792d7ee359cda1172c8025e45f81d357930324cad8801f077955242 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.X79 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.X79 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.X79 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=162fecc03023893951642eceac02a151 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.PhZ 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 162fecc03023893951642eceac02a151 1 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
162fecc03023893951642eceac02a151 1 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=162fecc03023893951642eceac02a151 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.PhZ 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.PhZ 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.PhZ 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:00.654 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fa6ee8bd6abff6bae3f1c717d6ec3384bf4769f53e19fc93 00:16:00.914 22:47:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.L2x 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fa6ee8bd6abff6bae3f1c717d6ec3384bf4769f53e19fc93 2 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fa6ee8bd6abff6bae3f1c717d6ec3384bf4769f53e19fc93 2 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fa6ee8bd6abff6bae3f1c717d6ec3384bf4769f53e19fc93 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.L2x 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.L2x 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.L2x 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9a7167b1406157902ef9ff4096bd3420c4ab979debd630d4 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.WqS 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9a7167b1406157902ef9ff4096bd3420c4ab979debd630d4 2 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9a7167b1406157902ef9ff4096bd3420c4ab979debd630d4 2 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9a7167b1406157902ef9ff4096bd3420c4ab979debd630d4 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.WqS 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.WqS 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.WqS 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8d5f33afabef8daa163abbd047868ce9 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Xke 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8d5f33afabef8daa163abbd047868ce9 1 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 8d5f33afabef8daa163abbd047868ce9 1 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.914 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8d5f33afabef8daa163abbd047868ce9 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Xke 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Xke 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Xke 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=52a265b02f4249fe02c5175e696b9bcbba75a95f1650725b16185da952e97d54 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.OYD 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 52a265b02f4249fe02c5175e696b9bcbba75a95f1650725b16185da952e97d54 3 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 52a265b02f4249fe02c5175e696b9bcbba75a95f1650725b16185da952e97d54 3 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=52a265b02f4249fe02c5175e696b9bcbba75a95f1650725b16185da952e97d54 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.OYD 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.OYD 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.OYD 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2354204 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2354204 ']' 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
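Each `gen_dhchap_key <digest> <len>` call above draws `len/2` random bytes with `xxd` and then wraps them into a DHHC-1 secret via an inline Python snippet. A self-contained sketch of that flow is below; it assumes the secret layout used by nvme-cli and the NVMe DH-HMAC-CHAP scheme, namely `DHHC-1:<2-digit hash id>:<base64(key bytes + little-endian CRC-32)>:` with hash id 0=null, 1=sha256, 2=sha384, 3=sha512 (the exact string SPDK's `format_dhchap_key` emits is not visible in this trace, so treat the format as an assumption). `od` stands in for `xxd` to stay POSIX-portable:

```shell
#!/usr/bin/env bash
# gen_dhchap_key DIGEST_ID LEN -> print a DHHC-1 secret with LEN hex chars
# of random key material (the log's version uses xxd; od is equivalent here).
gen_dhchap_key() {
    local digest=$1 len=$2 hex
    hex=$(head -c $((len / 2)) /dev/urandom | od -An -v -tx1 | tr -d ' \n')
    python3 - "$hex" "$digest" <<'EOF'
import base64, binascii, sys, zlib
key = binascii.unhexlify(sys.argv[1])
hmac_id = int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")   # CRC-32 appended little-endian
print(f"DHHC-1:{hmac_id:02}:{base64.b64encode(key + crc).decode()}:")
EOF
}

gen_dhchap_key 1 32   # 32 hex chars -> 16 random bytes, sha256-tagged secret
```

The test writes each secret to a `chmod 0600` temp file (`/tmp/spdk.key-<digest>.XXX`) and later loads it on both sides with `keyring_file_add_key`, as the RPC calls that follow show.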
00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:00.915 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2354291 /var/tmp/host.sock 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2354291 ']' 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:01.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:01.175 22:47:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iGe 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.iGe 00:16:01.434 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.iGe 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.X79 ]] 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.X79 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.X79 00:16:01.692 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.X79 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PhZ 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.PhZ 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.PhZ 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 
-- # [[ -n /tmp/spdk.key-sha384.L2x ]] 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2x 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2x 00:16:01.950 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2x 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WqS 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.WqS 00:16:02.209 22:47:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.WqS 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Xke ]] 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xke 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xke 00:16:02.468 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xke 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OYD 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.OYD 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.OYD 00:16:02.727 22:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.727 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.985 22:47:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:02.985 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.244 00:16:03.244 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.244 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.244 22:47:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.503 { 00:16:03.503 "cntlid": 1, 00:16:03.503 "qid": 0, 00:16:03.503 "state": "enabled", 00:16:03.503 "thread": "nvmf_tgt_poll_group_000", 00:16:03.503 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:03.503 "listen_address": { 00:16:03.503 "trtype": "TCP", 00:16:03.503 "adrfam": "IPv4", 00:16:03.503 "traddr": "10.0.0.2", 00:16:03.503 "trsvcid": "4420" 00:16:03.503 }, 00:16:03.503 "peer_address": { 00:16:03.503 "trtype": "TCP", 00:16:03.503 "adrfam": "IPv4", 00:16:03.503 "traddr": "10.0.0.1", 00:16:03.503 "trsvcid": "52320" 00:16:03.503 }, 00:16:03.503 "auth": { 00:16:03.503 "state": "completed", 00:16:03.503 "digest": "sha256", 00:16:03.503 "dhgroup": "null" 00:16:03.503 } 00:16:03.503 } 00:16:03.503 ]' 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:03.503 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:03.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:03.761 22:47:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:04.328 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.328 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:04.328 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.328 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.587 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.846 00:16:04.846 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.846 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.846 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.105 { 00:16:05.105 "cntlid": 3, 00:16:05.105 "qid": 0, 00:16:05.105 "state": "enabled", 00:16:05.105 "thread": "nvmf_tgt_poll_group_000", 00:16:05.105 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:05.105 "listen_address": { 00:16:05.105 "trtype": "TCP", 00:16:05.105 "adrfam": "IPv4", 
00:16:05.105 "traddr": "10.0.0.2", 00:16:05.105 "trsvcid": "4420" 00:16:05.105 }, 00:16:05.105 "peer_address": { 00:16:05.105 "trtype": "TCP", 00:16:05.105 "adrfam": "IPv4", 00:16:05.105 "traddr": "10.0.0.1", 00:16:05.105 "trsvcid": "52344" 00:16:05.105 }, 00:16:05.105 "auth": { 00:16:05.105 "state": "completed", 00:16:05.105 "digest": "sha256", 00:16:05.105 "dhgroup": "null" 00:16:05.105 } 00:16:05.105 } 00:16:05.105 ]' 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:05.105 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.365 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.365 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.365 22:47:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.365 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:05.365 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:05.933 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.192 
22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.192 22:47:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.451 00:16:06.451 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.451 22:47:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:06.451 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.709 { 00:16:06.709 "cntlid": 5, 00:16:06.709 "qid": 0, 00:16:06.709 "state": "enabled", 00:16:06.709 "thread": "nvmf_tgt_poll_group_000", 00:16:06.709 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.709 "listen_address": { 00:16:06.709 "trtype": "TCP", 00:16:06.709 "adrfam": "IPv4", 00:16:06.709 "traddr": "10.0.0.2", 00:16:06.709 "trsvcid": "4420" 00:16:06.709 }, 00:16:06.709 "peer_address": { 00:16:06.709 "trtype": "TCP", 00:16:06.709 "adrfam": "IPv4", 00:16:06.709 "traddr": "10.0.0.1", 00:16:06.709 "trsvcid": "52374" 00:16:06.709 }, 00:16:06.709 "auth": { 00:16:06.709 "state": "completed", 00:16:06.709 "digest": "sha256", 00:16:06.709 "dhgroup": "null" 00:16:06.709 } 00:16:06.709 } 00:16:06.709 ]' 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:06.709 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.968 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.968 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.968 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.968 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:06.968 22:47:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:07.535 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.535 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.535 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.535 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.535 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.794 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.053 00:16:08.053 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.053 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.053 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.313 { 00:16:08.313 "cntlid": 7, 00:16:08.313 "qid": 0, 00:16:08.313 "state": "enabled", 00:16:08.313 "thread": "nvmf_tgt_poll_group_000", 00:16:08.313 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:08.313 "listen_address": { 00:16:08.313 "trtype": "TCP", 00:16:08.313 "adrfam": "IPv4", 00:16:08.313 "traddr": "10.0.0.2", 00:16:08.313 "trsvcid": "4420" 00:16:08.313 }, 00:16:08.313 "peer_address": { 00:16:08.313 "trtype": "TCP", 00:16:08.313 "adrfam": "IPv4", 00:16:08.313 "traddr": "10.0.0.1", 00:16:08.313 "trsvcid": "52404" 00:16:08.313 }, 00:16:08.313 "auth": { 00:16:08.313 "state": "completed", 00:16:08.313 "digest": "sha256", 00:16:08.313 "dhgroup": "null" 00:16:08.313 } 00:16:08.313 } 00:16:08.313 ]' 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.313 22:47:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.313 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:08.571 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.571 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.571 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.571 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.571 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:08.571 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:09.138 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.138 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.138 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:09.138 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.138 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.138 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.139 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.139 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.139 22:47:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.139 22:47:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:09.397 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:16:09.397 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.398 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.656 00:16:09.656 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:09.656 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:09.656 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.915 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.915 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:09.915 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.915 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.915 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.915 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:09.915 { 00:16:09.915 "cntlid": 9, 00:16:09.915 "qid": 0, 00:16:09.915 "state": "enabled", 00:16:09.915 "thread": "nvmf_tgt_poll_group_000", 00:16:09.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:09.916 "listen_address": { 00:16:09.916 "trtype": "TCP", 00:16:09.916 "adrfam": "IPv4", 00:16:09.916 "traddr": "10.0.0.2", 00:16:09.916 
"trsvcid": "4420" 00:16:09.916 }, 00:16:09.916 "peer_address": { 00:16:09.916 "trtype": "TCP", 00:16:09.916 "adrfam": "IPv4", 00:16:09.916 "traddr": "10.0.0.1", 00:16:09.916 "trsvcid": "52430" 00:16:09.916 }, 00:16:09.916 "auth": { 00:16:09.916 "state": "completed", 00:16:09.916 "digest": "sha256", 00:16:09.916 "dhgroup": "ffdhe2048" 00:16:09.916 } 00:16:09.916 } 00:16:09.916 ]' 00:16:09.916 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:09.916 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:09.916 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.174 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:10.174 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.174 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.174 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.174 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.174 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:10.175 22:47:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:10.742 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:10.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:10.742 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:10.743 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.001 22:47:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.001 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.002 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.260 00:16:11.260 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.260 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.260 22:47:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.519 { 00:16:11.519 "cntlid": 11, 00:16:11.519 "qid": 0, 00:16:11.519 "state": "enabled", 00:16:11.519 "thread": "nvmf_tgt_poll_group_000", 00:16:11.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:11.519 "listen_address": { 00:16:11.519 "trtype": "TCP", 00:16:11.519 "adrfam": "IPv4", 00:16:11.519 "traddr": "10.0.0.2", 00:16:11.519 "trsvcid": "4420" 00:16:11.519 }, 00:16:11.519 "peer_address": { 00:16:11.519 "trtype": "TCP", 00:16:11.519 "adrfam": "IPv4", 00:16:11.519 "traddr": "10.0.0.1", 00:16:11.519 "trsvcid": "45034" 00:16:11.519 }, 00:16:11.519 "auth": { 00:16:11.519 "state": "completed", 00:16:11.519 "digest": "sha256", 00:16:11.519 "dhgroup": "ffdhe2048" 00:16:11.519 } 00:16:11.519 } 00:16:11.519 ]' 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.519 22:47:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:11.519 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.778 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.778 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.778 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.778 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:11.778 22:47:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.345 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.621 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.880 00:16:12.880 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:12.880 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:12.880 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.139 22:47:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.139 { 00:16:13.139 "cntlid": 13, 00:16:13.139 "qid": 0, 00:16:13.139 "state": "enabled", 00:16:13.139 "thread": "nvmf_tgt_poll_group_000", 00:16:13.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:13.139 "listen_address": { 00:16:13.139 "trtype": "TCP", 00:16:13.139 "adrfam": "IPv4", 00:16:13.139 "traddr": "10.0.0.2", 00:16:13.139 "trsvcid": "4420" 00:16:13.139 }, 00:16:13.139 "peer_address": { 00:16:13.139 "trtype": "TCP", 00:16:13.139 "adrfam": "IPv4", 00:16:13.139 "traddr": "10.0.0.1", 00:16:13.139 "trsvcid": "45050" 00:16:13.139 }, 00:16:13.139 "auth": { 00:16:13.139 "state": "completed", 00:16:13.139 "digest": "sha256", 00:16:13.139 "dhgroup": "ffdhe2048" 00:16:13.139 } 00:16:13.139 } 00:16:13.139 ]' 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:13.139 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.398 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.398 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.398 22:47:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.398 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:13.398 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:13.965 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:13.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:13.966 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.225 22:47:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.484 00:16:14.484 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.484 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.484 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:14.743 { 00:16:14.743 "cntlid": 15, 00:16:14.743 "qid": 0, 00:16:14.743 "state": "enabled", 00:16:14.743 "thread": "nvmf_tgt_poll_group_000", 00:16:14.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:14.743 "listen_address": { 00:16:14.743 "trtype": "TCP", 00:16:14.743 
"adrfam": "IPv4", 00:16:14.743 "traddr": "10.0.0.2", 00:16:14.743 "trsvcid": "4420" 00:16:14.743 }, 00:16:14.743 "peer_address": { 00:16:14.743 "trtype": "TCP", 00:16:14.743 "adrfam": "IPv4", 00:16:14.743 "traddr": "10.0.0.1", 00:16:14.743 "trsvcid": "45062" 00:16:14.743 }, 00:16:14.743 "auth": { 00:16:14.743 "state": "completed", 00:16:14.743 "digest": "sha256", 00:16:14.743 "dhgroup": "ffdhe2048" 00:16:14.743 } 00:16:14.743 } 00:16:14.743 ]' 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:14.743 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.002 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:15.002 22:47:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:15.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.569 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:15.828 
22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:15.828 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.086 00:16:16.086 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.086 22:47:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.086 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.347 { 00:16:16.347 "cntlid": 17, 00:16:16.347 "qid": 0, 00:16:16.347 "state": "enabled", 00:16:16.347 "thread": "nvmf_tgt_poll_group_000", 00:16:16.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:16.347 "listen_address": { 00:16:16.347 "trtype": "TCP", 00:16:16.347 "adrfam": "IPv4", 00:16:16.347 "traddr": "10.0.0.2", 00:16:16.347 "trsvcid": "4420" 00:16:16.347 }, 00:16:16.347 "peer_address": { 00:16:16.347 "trtype": "TCP", 00:16:16.347 "adrfam": "IPv4", 00:16:16.347 "traddr": "10.0.0.1", 00:16:16.347 "trsvcid": "45094" 00:16:16.347 }, 00:16:16.347 "auth": { 00:16:16.347 "state": "completed", 00:16:16.347 "digest": "sha256", 00:16:16.347 "dhgroup": "ffdhe3072" 00:16:16.347 } 00:16:16.347 } 00:16:16.347 ]' 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:16.347 22:47:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:16.347 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.347 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.347 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.605 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:16.605 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- 
# rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:17.172 22:47:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.431 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.690 00:16:17.690 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:17.690 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:17.690 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.949 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.949 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.949 22:47:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.949 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.949 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.949 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:17.949 { 00:16:17.949 "cntlid": 19, 00:16:17.949 "qid": 0, 00:16:17.949 "state": "enabled", 00:16:17.949 "thread": "nvmf_tgt_poll_group_000", 00:16:17.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:17.949 "listen_address": { 00:16:17.949 "trtype": "TCP", 00:16:17.949 "adrfam": "IPv4", 00:16:17.949 "traddr": "10.0.0.2", 00:16:17.949 "trsvcid": "4420" 00:16:17.949 }, 00:16:17.949 "peer_address": { 00:16:17.949 "trtype": "TCP", 00:16:17.949 "adrfam": "IPv4", 00:16:17.949 "traddr": "10.0.0.1", 00:16:17.949 "trsvcid": "45110" 00:16:17.949 }, 00:16:17.949 "auth": { 00:16:17.949 "state": "completed", 00:16:17.949 "digest": "sha256", 00:16:17.949 "dhgroup": "ffdhe3072" 00:16:17.949 } 00:16:17.950 } 00:16:17.950 ]' 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.950 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:18.208 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:18.208 22:47:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:18.776 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:19.034 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:16:19.034 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.035 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.293 00:16:19.293 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:19.293 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:19.293 22:47:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:19.552 { 00:16:19.552 "cntlid": 21, 00:16:19.552 "qid": 0, 00:16:19.552 "state": "enabled", 00:16:19.552 "thread": "nvmf_tgt_poll_group_000", 00:16:19.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:19.552 
"listen_address": { 00:16:19.552 "trtype": "TCP", 00:16:19.552 "adrfam": "IPv4", 00:16:19.552 "traddr": "10.0.0.2", 00:16:19.552 "trsvcid": "4420" 00:16:19.552 }, 00:16:19.552 "peer_address": { 00:16:19.552 "trtype": "TCP", 00:16:19.552 "adrfam": "IPv4", 00:16:19.552 "traddr": "10.0.0.1", 00:16:19.552 "trsvcid": "45126" 00:16:19.552 }, 00:16:19.552 "auth": { 00:16:19.552 "state": "completed", 00:16:19.552 "digest": "sha256", 00:16:19.552 "dhgroup": "ffdhe3072" 00:16:19.552 } 00:16:19.552 } 00:16:19.552 ]' 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.552 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.811 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:19.811 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 
10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:20.378 22:47:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.378 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:20.636 22:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.636 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:20.895 00:16:20.895 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.895 22:47:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.895 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:21.153 { 00:16:21.153 "cntlid": 23, 00:16:21.153 "qid": 0, 00:16:21.153 "state": "enabled", 00:16:21.153 "thread": "nvmf_tgt_poll_group_000", 00:16:21.153 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:21.153 "listen_address": { 00:16:21.153 "trtype": "TCP", 00:16:21.153 "adrfam": "IPv4", 00:16:21.153 "traddr": "10.0.0.2", 00:16:21.153 "trsvcid": "4420" 00:16:21.153 }, 00:16:21.153 "peer_address": { 00:16:21.153 "trtype": "TCP", 00:16:21.153 "adrfam": "IPv4", 00:16:21.153 "traddr": "10.0.0.1", 00:16:21.153 "trsvcid": "45146" 00:16:21.153 }, 00:16:21.153 "auth": { 00:16:21.153 "state": "completed", 00:16:21.153 "digest": "sha256", 00:16:21.153 "dhgroup": "ffdhe3072" 00:16:21.153 } 00:16:21.153 } 00:16:21.153 ]' 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.153 22:47:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:21.412 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:21.412 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.979 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:21.979 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.238 22:47:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.238 22:47:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:22.497 00:16:22.497 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.497 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.497 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.755 { 00:16:22.755 "cntlid": 25, 00:16:22.755 "qid": 0, 00:16:22.755 "state": "enabled", 00:16:22.755 "thread": "nvmf_tgt_poll_group_000", 00:16:22.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.755 "listen_address": { 00:16:22.755 "trtype": "TCP", 00:16:22.755 "adrfam": "IPv4", 00:16:22.755 "traddr": "10.0.0.2", 00:16:22.755 "trsvcid": "4420" 00:16:22.755 }, 00:16:22.755 "peer_address": { 00:16:22.755 "trtype": "TCP", 00:16:22.755 "adrfam": "IPv4", 00:16:22.755 "traddr": "10.0.0.1", 00:16:22.755 "trsvcid": "40452" 00:16:22.755 }, 00:16:22.755 "auth": { 00:16:22.755 "state": "completed", 00:16:22.755 "digest": "sha256", 00:16:22.755 "dhgroup": "ffdhe4096" 00:16:22.755 } 00:16:22.755 } 00:16:22.755 ]' 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.755 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.014 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.014 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:23.014 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.014 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:23.014 22:47:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.581 
22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:23.581 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:23.839 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.097 00:16:24.097 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.097 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.097 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.356 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.356 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.356 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.356 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.356 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.356 22:47:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:24.356 { 00:16:24.356 "cntlid": 27, 00:16:24.356 "qid": 0, 00:16:24.356 "state": "enabled", 00:16:24.356 "thread": "nvmf_tgt_poll_group_000", 00:16:24.356 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:24.356 "listen_address": { 00:16:24.356 "trtype": "TCP", 00:16:24.356 "adrfam": "IPv4", 00:16:24.356 "traddr": "10.0.0.2", 00:16:24.356 "trsvcid": "4420" 00:16:24.356 }, 00:16:24.356 "peer_address": { 00:16:24.356 "trtype": "TCP", 00:16:24.356 "adrfam": "IPv4", 00:16:24.356 "traddr": "10.0.0.1", 00:16:24.356 "trsvcid": "40464" 00:16:24.356 }, 00:16:24.356 "auth": { 00:16:24.356 "state": "completed", 00:16:24.356 "digest": "sha256", 00:16:24.356 "dhgroup": "ffdhe4096" 00:16:24.356 } 00:16:24.356 } 00:16:24.356 ]' 00:16:24.356 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:24.356 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.356 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:24.356 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:24.356 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.615 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.615 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.615 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.615 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:24.615 22:47:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:25.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.181 22:47:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.457 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:25.751 
00:16:25.751 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.751 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.751 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.039 { 00:16:26.039 "cntlid": 29, 00:16:26.039 "qid": 0, 00:16:26.039 "state": "enabled", 00:16:26.039 "thread": "nvmf_tgt_poll_group_000", 00:16:26.039 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.039 "listen_address": { 00:16:26.039 "trtype": "TCP", 00:16:26.039 "adrfam": "IPv4", 00:16:26.039 "traddr": "10.0.0.2", 00:16:26.039 "trsvcid": "4420" 00:16:26.039 }, 00:16:26.039 "peer_address": { 00:16:26.039 "trtype": "TCP", 00:16:26.039 "adrfam": "IPv4", 00:16:26.039 "traddr": "10.0.0.1", 00:16:26.039 "trsvcid": "40490" 00:16:26.039 }, 00:16:26.039 "auth": { 00:16:26.039 "state": "completed", 00:16:26.039 "digest": "sha256", 00:16:26.039 "dhgroup": "ffdhe4096" 00:16:26.039 } 00:16:26.039 } 00:16:26.039 ]' 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.039 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:26.310 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:26.310 22:47:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:26.877 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.877 22:47:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.877 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.877 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.878 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.878 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.878 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:26.878 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.136 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:27.137 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.137 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:27.395 00:16:27.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:27.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:27.395 22:47:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.652 22:47:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.652 { 00:16:27.652 "cntlid": 31, 00:16:27.652 "qid": 0, 00:16:27.652 "state": "enabled", 00:16:27.652 "thread": "nvmf_tgt_poll_group_000", 00:16:27.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:27.652 "listen_address": { 00:16:27.652 "trtype": "TCP", 00:16:27.652 "adrfam": "IPv4", 00:16:27.652 "traddr": "10.0.0.2", 00:16:27.652 "trsvcid": "4420" 00:16:27.652 }, 00:16:27.652 "peer_address": { 00:16:27.652 "trtype": "TCP", 00:16:27.652 "adrfam": "IPv4", 00:16:27.652 "traddr": "10.0.0.1", 00:16:27.652 "trsvcid": "40514" 00:16:27.652 }, 00:16:27.652 "auth": { 00:16:27.652 "state": "completed", 00:16:27.652 "digest": "sha256", 00:16:27.652 "dhgroup": "ffdhe4096" 00:16:27.652 } 00:16:27.652 } 00:16:27.652 ]' 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.652 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.910 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:27.910 22:47:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.476 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:28.734 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:28.734 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.734 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.735 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:28.993 00:16:28.993 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.993 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.993 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:29.252 { 00:16:29.252 "cntlid": 33, 00:16:29.252 "qid": 0, 00:16:29.252 "state": "enabled", 00:16:29.252 "thread": "nvmf_tgt_poll_group_000", 00:16:29.252 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:29.252 
"listen_address": { 00:16:29.252 "trtype": "TCP", 00:16:29.252 "adrfam": "IPv4", 00:16:29.252 "traddr": "10.0.0.2", 00:16:29.252 "trsvcid": "4420" 00:16:29.252 }, 00:16:29.252 "peer_address": { 00:16:29.252 "trtype": "TCP", 00:16:29.252 "adrfam": "IPv4", 00:16:29.252 "traddr": "10.0.0.1", 00:16:29.252 "trsvcid": "40544" 00:16:29.252 }, 00:16:29.252 "auth": { 00:16:29.252 "state": "completed", 00:16:29.252 "digest": "sha256", 00:16:29.252 "dhgroup": "ffdhe6144" 00:16:29.252 } 00:16:29.252 } 00:16:29.252 ]' 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:29.252 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:29.510 22:47:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:29.510 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.510 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.510 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.769 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:29.769 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:30.335 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:30.336 22:47:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.336 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:30.903 00:16:30.903 
22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.903 { 00:16:30.903 "cntlid": 35, 00:16:30.903 "qid": 0, 00:16:30.903 "state": "enabled", 00:16:30.903 "thread": "nvmf_tgt_poll_group_000", 00:16:30.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.903 "listen_address": { 00:16:30.903 "trtype": "TCP", 00:16:30.903 "adrfam": "IPv4", 00:16:30.903 "traddr": "10.0.0.2", 00:16:30.903 "trsvcid": "4420" 00:16:30.903 }, 00:16:30.903 "peer_address": { 00:16:30.903 "trtype": "TCP", 00:16:30.903 "adrfam": "IPv4", 00:16:30.903 "traddr": "10.0.0.1", 00:16:30.903 "trsvcid": "40568" 00:16:30.903 }, 00:16:30.903 "auth": { 00:16:30.903 "state": "completed", 00:16:30.903 "digest": "sha256", 00:16:30.903 "dhgroup": "ffdhe6144" 00:16:30.903 } 00:16:30.903 } 00:16:30.903 ]' 00:16:30.903 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.162 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.420 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:31.420 22:47:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.988 22:47:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:31.988 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.247 22:47:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.505 00:16:32.505 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.505 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.505 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.763 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.763 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.763 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.763 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.763 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.763 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:32.763 { 00:16:32.763 "cntlid": 37, 00:16:32.763 "qid": 0, 00:16:32.763 "state": "enabled", 00:16:32.763 "thread": "nvmf_tgt_poll_group_000", 00:16:32.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:32.764 "listen_address": { 00:16:32.764 "trtype": "TCP", 00:16:32.764 "adrfam": "IPv4", 00:16:32.764 "traddr": "10.0.0.2", 00:16:32.764 "trsvcid": "4420" 00:16:32.764 }, 00:16:32.764 "peer_address": { 00:16:32.764 "trtype": "TCP", 00:16:32.764 "adrfam": "IPv4", 00:16:32.764 "traddr": "10.0.0.1", 00:16:32.764 "trsvcid": "52116" 00:16:32.764 }, 00:16:32.764 "auth": { 00:16:32.764 "state": "completed", 00:16:32.764 "digest": "sha256", 00:16:32.764 "dhgroup": "ffdhe6144" 00:16:32.764 } 00:16:32.764 } 00:16:32.764 ]' 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.764 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.022 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:33.022 22:47:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:33.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for 
keyid in "${!keys[@]}" 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.589 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:33.848 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.106 00:16:34.106 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.106 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.106 22:47:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.365 { 00:16:34.365 "cntlid": 39, 00:16:34.365 "qid": 0, 00:16:34.365 "state": "enabled", 00:16:34.365 "thread": "nvmf_tgt_poll_group_000", 00:16:34.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:34.365 
"listen_address": { 00:16:34.365 "trtype": "TCP", 00:16:34.365 "adrfam": "IPv4", 00:16:34.365 "traddr": "10.0.0.2", 00:16:34.365 "trsvcid": "4420" 00:16:34.365 }, 00:16:34.365 "peer_address": { 00:16:34.365 "trtype": "TCP", 00:16:34.365 "adrfam": "IPv4", 00:16:34.365 "traddr": "10.0.0.1", 00:16:34.365 "trsvcid": "52154" 00:16:34.365 }, 00:16:34.365 "auth": { 00:16:34.365 "state": "completed", 00:16:34.365 "digest": "sha256", 00:16:34.365 "dhgroup": "ffdhe6144" 00:16:34.365 } 00:16:34.365 } 00:16:34.365 ]' 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:34.365 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:34.623 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:35.190 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.449 22:47:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.449 22:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.449 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.016 00:16:36.016 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.016 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.016 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.275 { 00:16:36.275 "cntlid": 41, 00:16:36.275 "qid": 0, 00:16:36.275 "state": "enabled", 00:16:36.275 "thread": "nvmf_tgt_poll_group_000", 00:16:36.275 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.275 "listen_address": { 00:16:36.275 "trtype": "TCP", 00:16:36.275 "adrfam": "IPv4", 00:16:36.275 "traddr": "10.0.0.2", 00:16:36.275 "trsvcid": "4420" 00:16:36.275 }, 00:16:36.275 "peer_address": { 00:16:36.275 "trtype": "TCP", 00:16:36.275 "adrfam": "IPv4", 00:16:36.275 "traddr": "10.0.0.1", 00:16:36.275 "trsvcid": "52182" 00:16:36.275 }, 00:16:36.275 "auth": { 00:16:36.275 "state": "completed", 00:16:36.275 "digest": "sha256", 00:16:36.275 "dhgroup": "ffdhe8192" 00:16:36.275 } 00:16:36.275 } 00:16:36.275 ]' 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.275 22:47:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.275 22:47:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.534 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:36.534 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.102 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.102 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.361 22:47:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.928 00:16:37.928 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.928 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.928 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.186 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.187 { 00:16:38.187 "cntlid": 43, 00:16:38.187 "qid": 0, 00:16:38.187 "state": "enabled", 00:16:38.187 "thread": "nvmf_tgt_poll_group_000", 00:16:38.187 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.187 "listen_address": { 00:16:38.187 "trtype": "TCP", 00:16:38.187 "adrfam": "IPv4", 00:16:38.187 "traddr": "10.0.0.2", 00:16:38.187 "trsvcid": "4420" 00:16:38.187 }, 00:16:38.187 "peer_address": { 00:16:38.187 "trtype": "TCP", 00:16:38.187 "adrfam": "IPv4", 00:16:38.187 "traddr": "10.0.0.1", 00:16:38.187 "trsvcid": "52214" 00:16:38.187 }, 00:16:38.187 "auth": { 00:16:38.187 "state": "completed", 00:16:38.187 "digest": "sha256", 00:16:38.187 "dhgroup": "ffdhe8192" 00:16:38.187 } 00:16:38.187 } 00:16:38.187 ]' 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.187 22:47:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.445 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:38.446 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for 
keyid in "${!keys[@]}" 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.013 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.272 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.272 22:47:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.273 22:47:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.840 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.840 { 00:16:39.840 "cntlid": 45, 00:16:39.840 "qid": 0, 00:16:39.840 "state": "enabled", 00:16:39.840 "thread": 
"nvmf_tgt_poll_group_000", 00:16:39.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:39.840 "listen_address": { 00:16:39.840 "trtype": "TCP", 00:16:39.840 "adrfam": "IPv4", 00:16:39.840 "traddr": "10.0.0.2", 00:16:39.840 "trsvcid": "4420" 00:16:39.840 }, 00:16:39.840 "peer_address": { 00:16:39.840 "trtype": "TCP", 00:16:39.840 "adrfam": "IPv4", 00:16:39.840 "traddr": "10.0.0.1", 00:16:39.840 "trsvcid": "52246" 00:16:39.840 }, 00:16:39.840 "auth": { 00:16:39.840 "state": "completed", 00:16:39.840 "digest": "sha256", 00:16:39.840 "dhgroup": "ffdhe8192" 00:16:39.840 } 00:16:39.840 } 00:16:39.840 ]' 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.840 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.098 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.098 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.098 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.098 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.098 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.357 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret 
DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:40.357 22:47:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:40.924 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:40.924 
22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.925 22:47:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:41.492 00:16:41.492 22:47:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:41.492 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:41.492 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.751 { 00:16:41.751 "cntlid": 47, 00:16:41.751 "qid": 0, 00:16:41.751 "state": "enabled", 00:16:41.751 "thread": "nvmf_tgt_poll_group_000", 00:16:41.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:41.751 "listen_address": { 00:16:41.751 "trtype": "TCP", 00:16:41.751 "adrfam": "IPv4", 00:16:41.751 "traddr": "10.0.0.2", 00:16:41.751 "trsvcid": "4420" 00:16:41.751 }, 00:16:41.751 "peer_address": { 00:16:41.751 "trtype": "TCP", 00:16:41.751 "adrfam": "IPv4", 00:16:41.751 "traddr": "10.0.0.1", 00:16:41.751 "trsvcid": "48454" 00:16:41.751 }, 00:16:41.751 "auth": { 00:16:41.751 "state": "completed", 00:16:41.751 "digest": "sha256", 00:16:41.751 "dhgroup": "ffdhe8192" 00:16:41.751 } 00:16:41.751 } 00:16:41.751 ]' 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.751 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.009 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:42.009 22:47:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.575 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.834 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:43.093 00:16:43.093 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.093 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.093 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.351 22:47:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.351 { 00:16:43.351 "cntlid": 49, 00:16:43.351 "qid": 0, 00:16:43.351 "state": "enabled", 00:16:43.351 "thread": "nvmf_tgt_poll_group_000", 00:16:43.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.351 "listen_address": { 00:16:43.351 "trtype": "TCP", 00:16:43.351 "adrfam": "IPv4", 00:16:43.351 "traddr": "10.0.0.2", 00:16:43.351 "trsvcid": "4420" 00:16:43.351 }, 00:16:43.351 "peer_address": { 00:16:43.351 "trtype": "TCP", 00:16:43.351 "adrfam": "IPv4", 00:16:43.351 "traddr": "10.0.0.1", 00:16:43.351 "trsvcid": "48478" 00:16:43.351 }, 00:16:43.351 "auth": { 00:16:43.351 "state": "completed", 00:16:43.351 "digest": "sha384", 00:16:43.351 "dhgroup": "null" 00:16:43.351 } 00:16:43.351 } 00:16:43.351 ]' 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:43.351 22:47:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:43.351 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:43.351 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:43.610 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:43.610 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:43.610 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.610 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:43.610 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:44.176 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.176 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.176 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.176 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.177 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.435 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:44.435 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.435 22:47:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.435 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.693 00:16:44.693 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.693 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.693 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.951 { 00:16:44.951 "cntlid": 51, 
00:16:44.951 "qid": 0, 00:16:44.951 "state": "enabled", 00:16:44.951 "thread": "nvmf_tgt_poll_group_000", 00:16:44.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:44.951 "listen_address": { 00:16:44.951 "trtype": "TCP", 00:16:44.951 "adrfam": "IPv4", 00:16:44.951 "traddr": "10.0.0.2", 00:16:44.951 "trsvcid": "4420" 00:16:44.951 }, 00:16:44.951 "peer_address": { 00:16:44.951 "trtype": "TCP", 00:16:44.951 "adrfam": "IPv4", 00:16:44.951 "traddr": "10.0.0.1", 00:16:44.951 "trsvcid": "48518" 00:16:44.951 }, 00:16:44.951 "auth": { 00:16:44.951 "state": "completed", 00:16:44.951 "digest": "sha384", 00:16:44.951 "dhgroup": "null" 00:16:44.951 } 00:16:44.951 } 00:16:44.951 ]' 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.951 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret 
DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:45.209 22:47:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:45.776 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.776 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.776 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.776 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 
00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.035 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:46.294 00:16:46.294 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:46.294 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:46.295 22:47:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.553 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.553 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.553 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.553 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.553 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.553 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:46.553 { 00:16:46.553 "cntlid": 53, 00:16:46.553 "qid": 0, 00:16:46.553 "state": "enabled", 00:16:46.553 "thread": "nvmf_tgt_poll_group_000", 00:16:46.553 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:46.553 "listen_address": { 00:16:46.553 "trtype": "TCP", 00:16:46.553 "adrfam": "IPv4", 00:16:46.553 "traddr": "10.0.0.2", 00:16:46.553 "trsvcid": "4420" 00:16:46.554 }, 00:16:46.554 "peer_address": { 00:16:46.554 "trtype": "TCP", 00:16:46.554 "adrfam": "IPv4", 00:16:46.554 "traddr": "10.0.0.1", 00:16:46.554 "trsvcid": "48552" 00:16:46.554 }, 00:16:46.554 "auth": { 00:16:46.554 "state": "completed", 00:16:46.554 "digest": "sha384", 00:16:46.554 "dhgroup": "null" 00:16:46.554 } 00:16:46.554 } 
00:16:46.554 ]' 00:16:46.554 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.554 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.554 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.554 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:46.554 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.812 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.812 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.812 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.071 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:47.071 22:47:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.646 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.646 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.906 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.906 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:48.165 { 00:16:48.165 "cntlid": 55, 00:16:48.165 "qid": 0, 00:16:48.165 "state": "enabled", 00:16:48.165 "thread": "nvmf_tgt_poll_group_000", 00:16:48.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:48.165 "listen_address": { 00:16:48.165 "trtype": "TCP", 00:16:48.165 "adrfam": "IPv4", 00:16:48.165 "traddr": "10.0.0.2", 00:16:48.165 "trsvcid": "4420" 00:16:48.165 }, 00:16:48.165 "peer_address": { 00:16:48.165 "trtype": "TCP", 00:16:48.165 "adrfam": "IPv4", 00:16:48.165 "traddr": "10.0.0.1", 00:16:48.165 "trsvcid": "48586" 00:16:48.165 }, 00:16:48.165 "auth": { 00:16:48.165 "state": "completed", 00:16:48.165 "digest": "sha384", 00:16:48.165 "dhgroup": "null" 00:16:48.165 } 00:16:48.165 } 00:16:48.165 ]' 00:16:48.165 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:48.424 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:48.424 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:48.424 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:48.424 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:48.424 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.424 22:47:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.424 22:47:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.683 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:48.683 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:49.251 
22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.251 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.509 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.510 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.510 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.510 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.510 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.510 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.510 22:47:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.510 00:16:49.768 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.768 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.768 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.768 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.768 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.768 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.769 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.769 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.769 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.769 { 00:16:49.769 "cntlid": 57, 00:16:49.769 "qid": 0, 00:16:49.769 "state": "enabled", 00:16:49.769 "thread": "nvmf_tgt_poll_group_000", 00:16:49.769 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.769 "listen_address": { 00:16:49.769 "trtype": "TCP", 00:16:49.769 "adrfam": "IPv4", 00:16:49.769 "traddr": "10.0.0.2", 00:16:49.769 "trsvcid": "4420" 00:16:49.769 }, 00:16:49.769 "peer_address": { 00:16:49.769 "trtype": "TCP", 00:16:49.769 "adrfam": "IPv4", 00:16:49.769 "traddr": "10.0.0.1", 00:16:49.769 "trsvcid": "48622" 00:16:49.769 }, 00:16:49.769 "auth": { 00:16:49.769 "state": "completed", 00:16:49.769 "digest": "sha384", 00:16:49.769 "dhgroup": "ffdhe2048" 00:16:49.769 } 00:16:49.769 } 00:16:49.769 ]' 00:16:49.769 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.027 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.285 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret 
DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:50.285 22:47:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:50.851 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:51.110 22:47:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.110 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:51.110 00:16:51.369 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.369 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.369 22:47:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.369 { 00:16:51.369 "cntlid": 59, 00:16:51.369 "qid": 0, 00:16:51.369 "state": "enabled", 00:16:51.369 "thread": "nvmf_tgt_poll_group_000", 00:16:51.369 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.369 "listen_address": { 00:16:51.369 "trtype": "TCP", 00:16:51.369 "adrfam": "IPv4", 00:16:51.369 "traddr": "10.0.0.2", 00:16:51.369 "trsvcid": "4420" 00:16:51.369 }, 00:16:51.369 "peer_address": { 00:16:51.369 "trtype": "TCP", 00:16:51.369 "adrfam": "IPv4", 00:16:51.369 "traddr": "10.0.0.1", 00:16:51.369 "trsvcid": "33600" 00:16:51.369 }, 00:16:51.369 "auth": { 00:16:51.369 "state": 
"completed", 00:16:51.369 "digest": "sha384", 00:16:51.369 "dhgroup": "ffdhe2048" 00:16:51.369 } 00:16:51.369 } 00:16:51.369 ]' 00:16:51.369 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.627 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.885 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:51.885 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:52.451 22:47:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.451 22:47:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.710 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.968 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.968 
22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.968 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.968 { 00:16:52.968 "cntlid": 61, 00:16:52.968 "qid": 0, 00:16:52.968 "state": "enabled", 00:16:52.968 "thread": "nvmf_tgt_poll_group_000", 00:16:52.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:52.968 "listen_address": { 00:16:52.968 "trtype": "TCP", 00:16:52.968 "adrfam": "IPv4", 00:16:52.968 "traddr": "10.0.0.2", 00:16:52.968 "trsvcid": "4420" 00:16:52.968 }, 00:16:52.968 "peer_address": { 00:16:52.968 "trtype": "TCP", 00:16:52.968 "adrfam": "IPv4", 00:16:52.968 "traddr": "10.0.0.1", 00:16:52.968 "trsvcid": "33636" 00:16:52.968 }, 00:16:52.968 "auth": { 00:16:52.968 "state": "completed", 00:16:52.968 "digest": "sha384", 00:16:52.968 "dhgroup": "ffdhe2048" 00:16:52.968 } 00:16:52.968 } 00:16:52.968 ]' 00:16:52.969 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:53.227 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:53.227 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:53.227 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:53.227 22:48:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:53.227 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.227 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.227 22:48:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.486 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:53.486 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.113 
22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:54.113 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.114 22:48:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.114 22:48:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.372 00:16:54.372 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.372 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.372 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.630 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.630 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.630 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.630 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.630 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.630 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.630 { 00:16:54.630 "cntlid": 63, 
00:16:54.630 "qid": 0, 00:16:54.630 "state": "enabled", 00:16:54.630 "thread": "nvmf_tgt_poll_group_000", 00:16:54.630 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:54.630 "listen_address": { 00:16:54.631 "trtype": "TCP", 00:16:54.631 "adrfam": "IPv4", 00:16:54.631 "traddr": "10.0.0.2", 00:16:54.631 "trsvcid": "4420" 00:16:54.631 }, 00:16:54.631 "peer_address": { 00:16:54.631 "trtype": "TCP", 00:16:54.631 "adrfam": "IPv4", 00:16:54.631 "traddr": "10.0.0.1", 00:16:54.631 "trsvcid": "33682" 00:16:54.631 }, 00:16:54.631 "auth": { 00:16:54.631 "state": "completed", 00:16:54.631 "digest": "sha384", 00:16:54.631 "dhgroup": "ffdhe2048" 00:16:54.631 } 00:16:54.631 } 00:16:54.631 ]' 00:16:54.631 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.631 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.631 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.889 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:54.889 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.889 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.889 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.889 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.148 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:55.148 22:48:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.715 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:55.715 22:48:03 
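Each iteration above verifies the negotiated auth parameters the same way: `nvmf_subsystem_get_qpairs` returns a JSON document, and `jq` pulls out `digest`, `dhgroup`, and `state` for comparison. A minimal standalone sketch of that verification step, using a sample qpairs document modeled on the output in this log (the JSON here is illustrative, not captured live):

```shell
#!/usr/bin/env bash
# Sketch of the connect_authenticate check, assuming a qpairs document
# shaped like the nvmf_subsystem_get_qpairs output shown in the log.
qpairs='[{"cntlid": 63, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha384", "dhgroup": "ffdhe2048"}}]'

# Extract the three fields the test script compares.
digest=$(jq -r '.[0].auth.digest' <<<"$qpairs")
dhgroup=$(jq -r '.[0].auth.dhgroup' <<<"$qpairs")
state=$(jq -r '.[0].auth.state' <<<"$qpairs")

# The real script uses [[ x == \p\a\t\t\e\r\n ]] (xtrace escapes the
# literal); a plain quoted comparison is equivalent here.
if [[ $digest == "sha384" && $dhgroup == "ffdhe2048" && $state == "completed" ]]; then
  echo "auth verified: $digest/$dhgroup"   # prints: auth verified: sha384/ffdhe2048
else
  echo "auth mismatch" >&2
fi
```

The backslash-escaped right-hand sides visible in the log (e.g. `\c\o\m\p\l\e\t\e\d`) are just how `set -x` prints a quoted literal in a `[[ ]]` pattern match.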
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.715 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.974 00:16:55.974 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.974 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.974 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.232 { 00:16:56.232 "cntlid": 65, 00:16:56.232 "qid": 0, 00:16:56.232 "state": "enabled", 00:16:56.232 "thread": "nvmf_tgt_poll_group_000", 00:16:56.232 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:56.232 "listen_address": { 00:16:56.232 "trtype": "TCP", 00:16:56.232 "adrfam": "IPv4", 00:16:56.232 "traddr": "10.0.0.2", 00:16:56.232 "trsvcid": "4420" 00:16:56.232 }, 00:16:56.232 "peer_address": { 00:16:56.232 "trtype": "TCP", 00:16:56.232 "adrfam": "IPv4", 00:16:56.232 "traddr": "10.0.0.1", 00:16:56.232 "trsvcid": "33708" 00:16:56.232 }, 00:16:56.232 "auth": { 00:16:56.232 "state": 
"completed", 00:16:56.232 "digest": "sha384", 00:16:56.232 "dhgroup": "ffdhe3072" 00:16:56.232 } 00:16:56.232 } 00:16:56.232 ]' 00:16:56.232 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.233 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.233 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.491 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:56.491 22:48:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.491 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.491 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.491 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.749 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:56.749 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret 
DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.317 22:48:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # key=key1 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.317 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.574 00:16:57.574 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.574 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.574 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.832 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.832 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.832 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.832 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.832 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.832 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.832 { 00:16:57.832 "cntlid": 67, 00:16:57.832 "qid": 0, 00:16:57.832 "state": "enabled", 00:16:57.832 "thread": "nvmf_tgt_poll_group_000", 00:16:57.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:57.832 "listen_address": { 00:16:57.832 "trtype": "TCP", 00:16:57.832 "adrfam": "IPv4", 00:16:57.832 "traddr": "10.0.0.2", 00:16:57.832 "trsvcid": "4420" 00:16:57.832 }, 00:16:57.832 "peer_address": { 00:16:57.832 "trtype": "TCP", 00:16:57.832 "adrfam": "IPv4", 00:16:57.832 "traddr": "10.0.0.1", 00:16:57.832 "trsvcid": "33742" 00:16:57.832 }, 00:16:57.832 "auth": { 00:16:57.833 "state": "completed", 00:16:57.833 "digest": "sha384", 00:16:57.833 "dhgroup": "ffdhe3072" 00:16:57.833 } 00:16:57.833 } 00:16:57.833 ]' 00:16:57.833 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.833 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.833 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:58.091 22:48:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:58.091 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:58.091 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.091 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.091 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.349 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:58.349 22:48:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.916 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:58.916 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.917 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.175 00:16:59.175 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.175 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.175 22:48:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.434 22:48:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.434 { 00:16:59.434 "cntlid": 69, 00:16:59.434 "qid": 0, 00:16:59.434 "state": "enabled", 00:16:59.434 "thread": "nvmf_tgt_poll_group_000", 00:16:59.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:59.434 "listen_address": { 00:16:59.434 "trtype": "TCP", 00:16:59.434 "adrfam": "IPv4", 00:16:59.434 "traddr": "10.0.0.2", 00:16:59.434 "trsvcid": "4420" 00:16:59.434 }, 00:16:59.434 "peer_address": { 00:16:59.434 "trtype": "TCP", 00:16:59.434 "adrfam": "IPv4", 00:16:59.434 "traddr": "10.0.0.1", 00:16:59.434 "trsvcid": "33762" 00:16:59.434 }, 00:16:59.434 "auth": { 00:16:59.434 "state": "completed", 00:16:59.434 "digest": "sha384", 00:16:59.434 "dhgroup": "ffdhe3072" 00:16:59.434 } 00:16:59.434 } 00:16:59.434 ]' 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.434 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.692 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:59.692 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.692 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.692 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.692 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.951 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:16:59.951 22:48:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.519 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.519 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:00.520 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.520 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.520 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.520 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.520 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.520 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.778 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.037 { 00:17:01.037 "cntlid": 71, 00:17:01.037 "qid": 0, 00:17:01.037 "state": "enabled", 00:17:01.037 "thread": "nvmf_tgt_poll_group_000", 00:17:01.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:01.037 "listen_address": { 00:17:01.037 "trtype": "TCP", 00:17:01.037 "adrfam": "IPv4", 00:17:01.037 "traddr": "10.0.0.2", 00:17:01.037 "trsvcid": "4420" 00:17:01.037 }, 00:17:01.037 "peer_address": { 00:17:01.037 "trtype": "TCP", 00:17:01.037 "adrfam": "IPv4", 00:17:01.037 "traddr": 
"10.0.0.1", 00:17:01.037 "trsvcid": "33782" 00:17:01.037 }, 00:17:01.037 "auth": { 00:17:01.037 "state": "completed", 00:17:01.037 "digest": "sha384", 00:17:01.037 "dhgroup": "ffdhe3072" 00:17:01.037 } 00:17:01.037 } 00:17:01.037 ]' 00:17:01.037 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.296 22:48:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.555 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:01.555 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:02.122 22:48:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.122 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.122 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:02.380 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:17:02.380 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.380 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:02.380 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # key=key0 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.381 22:48:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.639 00:17:02.639 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.639 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.639 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:02.898 { 00:17:02.898 "cntlid": 73, 00:17:02.898 "qid": 0, 00:17:02.898 "state": "enabled", 00:17:02.898 "thread": "nvmf_tgt_poll_group_000", 00:17:02.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:02.898 "listen_address": { 00:17:02.898 "trtype": "TCP", 00:17:02.898 "adrfam": "IPv4", 00:17:02.898 "traddr": "10.0.0.2", 00:17:02.898 "trsvcid": "4420" 00:17:02.898 }, 00:17:02.898 "peer_address": { 00:17:02.898 "trtype": "TCP", 00:17:02.898 "adrfam": "IPv4", 00:17:02.898 "traddr": "10.0.0.1", 00:17:02.898 "trsvcid": "57684" 00:17:02.898 }, 00:17:02.898 "auth": { 00:17:02.898 "state": "completed", 00:17:02.898 "digest": "sha384", 00:17:02.898 "dhgroup": "ffdhe4096" 00:17:02.898 } 00:17:02.898 } 00:17:02.898 ]' 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:02.898 22:48:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.898 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.157 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:03.157 22:48:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:03.724 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.724 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.725 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.725 22:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.725 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.725 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.725 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.725 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.725 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.983 22:48:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.983 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.242 00:17:04.242 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.242 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.242 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.501 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.501 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.501 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.501 22:48:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.501 { 00:17:04.501 "cntlid": 75, 00:17:04.501 "qid": 0, 00:17:04.501 "state": "enabled", 00:17:04.501 "thread": "nvmf_tgt_poll_group_000", 00:17:04.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:04.501 "listen_address": { 00:17:04.501 "trtype": "TCP", 00:17:04.501 "adrfam": "IPv4", 00:17:04.501 "traddr": "10.0.0.2", 00:17:04.501 "trsvcid": "4420" 00:17:04.501 }, 00:17:04.501 "peer_address": { 00:17:04.501 "trtype": "TCP", 00:17:04.501 "adrfam": "IPv4", 00:17:04.501 "traddr": "10.0.0.1", 00:17:04.501 "trsvcid": "57718" 00:17:04.501 }, 00:17:04.501 "auth": { 00:17:04.501 "state": "completed", 00:17:04.501 "digest": "sha384", 00:17:04.501 "dhgroup": "ffdhe4096" 00:17:04.501 } 00:17:04.501 } 00:17:04.501 ]' 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.501 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.761 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:04.761 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:05.329 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.329 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.330 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.330 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.330 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.330 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.330 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.330 22:48:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.330 22:48:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.591 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.849 00:17:05.849 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.849 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.849 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:06.109 { 00:17:06.109 "cntlid": 77, 00:17:06.109 "qid": 0, 00:17:06.109 "state": "enabled", 00:17:06.109 "thread": "nvmf_tgt_poll_group_000", 00:17:06.109 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:06.109 "listen_address": { 00:17:06.109 "trtype": "TCP", 00:17:06.109 "adrfam": "IPv4", 00:17:06.109 "traddr": "10.0.0.2", 
00:17:06.109 "trsvcid": "4420" 00:17:06.109 }, 00:17:06.109 "peer_address": { 00:17:06.109 "trtype": "TCP", 00:17:06.109 "adrfam": "IPv4", 00:17:06.109 "traddr": "10.0.0.1", 00:17:06.109 "trsvcid": "57736" 00:17:06.109 }, 00:17:06.109 "auth": { 00:17:06.109 "state": "completed", 00:17:06.109 "digest": "sha384", 00:17:06.109 "dhgroup": "ffdhe4096" 00:17:06.109 } 00:17:06.109 } 00:17:06.109 ]' 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.109 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.368 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:06.368 22:48:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:06.936 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.195 22:48:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.456 00:17:07.456 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.456 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.456 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.715 { 00:17:07.715 "cntlid": 79, 00:17:07.715 "qid": 0, 00:17:07.715 "state": "enabled", 00:17:07.715 "thread": "nvmf_tgt_poll_group_000", 00:17:07.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:07.715 "listen_address": { 00:17:07.715 "trtype": "TCP", 00:17:07.715 "adrfam": "IPv4", 00:17:07.715 "traddr": "10.0.0.2", 00:17:07.715 "trsvcid": "4420" 00:17:07.715 }, 00:17:07.715 "peer_address": { 00:17:07.715 "trtype": "TCP", 00:17:07.715 "adrfam": "IPv4", 00:17:07.715 "traddr": "10.0.0.1", 00:17:07.715 "trsvcid": "57756" 00:17:07.715 }, 00:17:07.715 "auth": { 00:17:07.715 "state": "completed", 00:17:07.715 "digest": "sha384", 00:17:07.715 "dhgroup": "ffdhe4096" 00:17:07.715 } 00:17:07.715 } 00:17:07.715 ]' 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.715 22:48:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.715 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.973 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:07.973 22:48:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.563 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.563 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:08.847 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:17:08.847 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.847 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:08.847 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:08.847 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.847 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.848 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.106 00:17:09.106 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:09.106 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.106 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.365 22:48:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.365 { 00:17:09.365 "cntlid": 81, 00:17:09.365 "qid": 0, 00:17:09.365 "state": "enabled", 00:17:09.365 "thread": "nvmf_tgt_poll_group_000", 00:17:09.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:09.365 "listen_address": { 00:17:09.365 "trtype": "TCP", 00:17:09.365 "adrfam": "IPv4", 00:17:09.365 "traddr": "10.0.0.2", 00:17:09.365 "trsvcid": "4420" 00:17:09.365 }, 00:17:09.365 "peer_address": { 00:17:09.365 "trtype": "TCP", 00:17:09.365 "adrfam": "IPv4", 00:17:09.365 "traddr": "10.0.0.1", 00:17:09.365 "trsvcid": "57794" 00:17:09.365 }, 00:17:09.365 "auth": { 00:17:09.365 "state": "completed", 00:17:09.365 "digest": "sha384", 00:17:09.365 "dhgroup": "ffdhe6144" 00:17:09.365 } 00:17:09.365 } 00:17:09.365 ]' 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:09.365 22:48:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.365 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.365 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.365 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.365 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.365 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.624 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:09.624 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.189 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.189 22:48:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.189 22:48:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:10.447 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.448 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.706 00:17:10.706 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.706 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.706 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.965 { 00:17:10.965 "cntlid": 83, 00:17:10.965 "qid": 0, 00:17:10.965 "state": "enabled", 00:17:10.965 "thread": "nvmf_tgt_poll_group_000", 00:17:10.965 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:10.965 "listen_address": { 00:17:10.965 "trtype": "TCP", 00:17:10.965 "adrfam": "IPv4", 00:17:10.965 "traddr": "10.0.0.2", 
00:17:10.965 "trsvcid": "4420" 00:17:10.965 }, 00:17:10.965 "peer_address": { 00:17:10.965 "trtype": "TCP", 00:17:10.965 "adrfam": "IPv4", 00:17:10.965 "traddr": "10.0.0.1", 00:17:10.965 "trsvcid": "57824" 00:17:10.965 }, 00:17:10.965 "auth": { 00:17:10.965 "state": "completed", 00:17:10.965 "digest": "sha384", 00:17:10.965 "dhgroup": "ffdhe6144" 00:17:10.965 } 00:17:10.965 } 00:17:10.965 ]' 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.965 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:11.223 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:11.223 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.223 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.223 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.223 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:11.223 22:48:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:11.790 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.049 22:48:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.615 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq 
-r '.[].name' 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.615 { 00:17:12.615 "cntlid": 85, 00:17:12.615 "qid": 0, 00:17:12.615 "state": "enabled", 00:17:12.615 "thread": "nvmf_tgt_poll_group_000", 00:17:12.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:12.615 "listen_address": { 00:17:12.615 "trtype": "TCP", 00:17:12.615 "adrfam": "IPv4", 00:17:12.615 "traddr": "10.0.0.2", 00:17:12.615 "trsvcid": "4420" 00:17:12.615 }, 00:17:12.615 "peer_address": { 00:17:12.615 "trtype": "TCP", 00:17:12.615 "adrfam": "IPv4", 00:17:12.615 "traddr": "10.0.0.1", 00:17:12.615 "trsvcid": "57624" 00:17:12.615 }, 00:17:12.615 "auth": { 00:17:12.615 "state": "completed", 00:17:12.615 "digest": "sha384", 00:17:12.615 "dhgroup": "ffdhe6144" 00:17:12.615 } 00:17:12.615 } 00:17:12.615 ]' 00:17:12.615 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.873 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:12.873 22:48:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.873 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:12.873 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.873 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.873 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.873 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.131 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:13.131 22:48:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.697 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.956 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:14.214 00:17:14.214 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:14.214 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:14.214 22:48:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:14.473 { 00:17:14.473 "cntlid": 87, 00:17:14.473 "qid": 0, 00:17:14.473 "state": "enabled", 00:17:14.473 "thread": "nvmf_tgt_poll_group_000", 00:17:14.473 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:14.473 "listen_address": { 00:17:14.473 "trtype": "TCP", 00:17:14.473 "adrfam": "IPv4", 00:17:14.473 "traddr": "10.0.0.2", 00:17:14.473 "trsvcid": "4420" 00:17:14.473 }, 00:17:14.473 "peer_address": { 00:17:14.473 "trtype": "TCP", 00:17:14.473 "adrfam": "IPv4", 00:17:14.473 "traddr": "10.0.0.1", 00:17:14.473 "trsvcid": "57640" 00:17:14.473 }, 00:17:14.473 "auth": { 00:17:14.473 "state": "completed", 00:17:14.473 "digest": "sha384", 00:17:14.473 "dhgroup": "ffdhe6144" 00:17:14.473 } 00:17:14.473 } 00:17:14.473 ]' 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.473 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.731 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:14.731 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:15.298 22:48:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.298 22:48:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.559 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:16.129 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.129 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.129 { 00:17:16.129 "cntlid": 89, 00:17:16.129 "qid": 0, 00:17:16.129 "state": "enabled", 00:17:16.129 "thread": "nvmf_tgt_poll_group_000", 00:17:16.129 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:16.129 "listen_address": { 00:17:16.129 "trtype": "TCP", 00:17:16.129 "adrfam": "IPv4", 00:17:16.129 "traddr": "10.0.0.2", 
00:17:16.129 "trsvcid": "4420" 00:17:16.129 }, 00:17:16.129 "peer_address": { 00:17:16.129 "trtype": "TCP", 00:17:16.129 "adrfam": "IPv4", 00:17:16.129 "traddr": "10.0.0.1", 00:17:16.129 "trsvcid": "57656" 00:17:16.129 }, 00:17:16.129 "auth": { 00:17:16.129 "state": "completed", 00:17:16.129 "digest": "sha384", 00:17:16.129 "dhgroup": "ffdhe8192" 00:17:16.129 } 00:17:16.129 } 00:17:16.129 ]' 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.389 22:48:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.648 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:16.648 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.216 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:17.475 22:48:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.475 22:48:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.734 00:17:17.734 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:17.734 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:17.734 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:17.993 { 00:17:17.993 "cntlid": 91, 00:17:17.993 "qid": 0, 00:17:17.993 "state": "enabled", 00:17:17.993 "thread": "nvmf_tgt_poll_group_000", 00:17:17.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:17.993 "listen_address": { 00:17:17.993 "trtype": "TCP", 00:17:17.993 "adrfam": "IPv4", 00:17:17.993 "traddr": "10.0.0.2", 00:17:17.993 "trsvcid": "4420" 00:17:17.993 }, 00:17:17.993 "peer_address": { 00:17:17.993 "trtype": "TCP", 00:17:17.993 "adrfam": "IPv4", 00:17:17.993 "traddr": "10.0.0.1", 00:17:17.993 "trsvcid": "57682" 00:17:17.993 }, 00:17:17.993 "auth": { 00:17:17.993 "state": "completed", 00:17:17.993 "digest": "sha384", 00:17:17.993 "dhgroup": "ffdhe8192" 00:17:17.993 } 00:17:17.993 } 00:17:17.993 ]' 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:17.993 22:48:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.993 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.252 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:18.253 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.253 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.253 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.253 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.512 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:18.512 22:48:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.080 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.080 22:48:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.648 00:17:19.648 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.648 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.648 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:19.907 22:48:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:19.907 { 00:17:19.907 "cntlid": 93, 00:17:19.907 "qid": 0, 00:17:19.907 "state": "enabled", 00:17:19.907 "thread": "nvmf_tgt_poll_group_000", 00:17:19.907 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:19.907 "listen_address": { 00:17:19.907 "trtype": "TCP", 00:17:19.907 "adrfam": "IPv4", 00:17:19.907 "traddr": "10.0.0.2", 00:17:19.907 "trsvcid": "4420" 00:17:19.907 }, 00:17:19.907 "peer_address": { 00:17:19.907 "trtype": "TCP", 00:17:19.907 "adrfam": "IPv4", 00:17:19.907 "traddr": "10.0.0.1", 00:17:19.907 "trsvcid": "57710" 00:17:19.907 }, 00:17:19.907 "auth": { 00:17:19.907 "state": "completed", 00:17:19.907 "digest": "sha384", 00:17:19.907 "dhgroup": "ffdhe8192" 00:17:19.907 } 00:17:19.907 } 00:17:19.907 ]' 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.907 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.166 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:20.166 22:48:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.734 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.734 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:20.993 22:48:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:21.561 00:17:21.561 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.561 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.561 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.821 { 00:17:21.821 "cntlid": 95, 00:17:21.821 "qid": 0, 00:17:21.821 "state": "enabled", 00:17:21.821 "thread": "nvmf_tgt_poll_group_000", 00:17:21.821 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:21.821 "listen_address": { 00:17:21.821 "trtype": "TCP", 00:17:21.821 
"adrfam": "IPv4", 00:17:21.821 "traddr": "10.0.0.2", 00:17:21.821 "trsvcid": "4420" 00:17:21.821 }, 00:17:21.821 "peer_address": { 00:17:21.821 "trtype": "TCP", 00:17:21.821 "adrfam": "IPv4", 00:17:21.821 "traddr": "10.0.0.1", 00:17:21.821 "trsvcid": "44526" 00:17:21.821 }, 00:17:21.821 "auth": { 00:17:21.821 "state": "completed", 00:17:21.821 "digest": "sha384", 00:17:21.821 "dhgroup": "ffdhe8192" 00:17:21.821 } 00:17:21.821 } 00:17:21.821 ]' 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.821 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.080 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:22.080 22:48:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.648 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.907 
22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:22.907 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:23.166 00:17:23.166 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.166 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.166 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.425 { 00:17:23.425 "cntlid": 97, 00:17:23.425 "qid": 0, 00:17:23.425 "state": "enabled", 00:17:23.425 "thread": "nvmf_tgt_poll_group_000", 00:17:23.425 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:23.425 "listen_address": { 00:17:23.425 "trtype": "TCP", 00:17:23.425 "adrfam": "IPv4", 00:17:23.425 "traddr": "10.0.0.2", 00:17:23.425 "trsvcid": "4420" 00:17:23.425 }, 00:17:23.425 "peer_address": { 00:17:23.425 "trtype": "TCP", 00:17:23.425 "adrfam": "IPv4", 00:17:23.425 "traddr": "10.0.0.1", 00:17:23.425 "trsvcid": "44548" 00:17:23.425 }, 00:17:23.425 "auth": { 00:17:23.425 "state": "completed", 00:17:23.425 "digest": "sha512", 00:17:23.425 "dhgroup": "null" 00:17:23.425 } 00:17:23.425 } 00:17:23.425 ]' 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.425 22:48:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.425 22:48:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.425 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:23.425 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.425 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.425 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.425 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.684 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:23.684 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.252 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.252 22:48:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:24.252 22:48:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.511 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:24.771 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.771 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:25.030 { 00:17:25.030 "cntlid": 99, 00:17:25.030 "qid": 0, 00:17:25.030 "state": "enabled", 00:17:25.030 "thread": "nvmf_tgt_poll_group_000", 00:17:25.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:25.030 "listen_address": { 00:17:25.030 "trtype": "TCP", 00:17:25.030 "adrfam": "IPv4", 00:17:25.030 "traddr": "10.0.0.2", 00:17:25.030 "trsvcid": "4420" 00:17:25.030 }, 00:17:25.030 "peer_address": { 00:17:25.030 "trtype": "TCP", 00:17:25.030 "adrfam": "IPv4", 00:17:25.030 "traddr": "10.0.0.1", 00:17:25.030 "trsvcid": "44580" 00:17:25.030 }, 00:17:25.030 "auth": { 00:17:25.030 "state": "completed", 00:17:25.030 "digest": "sha512", 00:17:25.030 "dhgroup": "null" 00:17:25.030 } 00:17:25.030 } 00:17:25.030 ]' 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 
00:17:25.030 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.031 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.290 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:25.290 22:48:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:25.858 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.119 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:26.378 00:17:26.378 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.378 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.378 22:48:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.378 { 00:17:26.378 "cntlid": 101, 00:17:26.378 "qid": 0, 00:17:26.378 "state": "enabled", 00:17:26.378 "thread": "nvmf_tgt_poll_group_000", 00:17:26.378 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:26.378 "listen_address": { 00:17:26.378 "trtype": "TCP", 00:17:26.378 "adrfam": "IPv4", 00:17:26.378 "traddr": "10.0.0.2", 00:17:26.378 "trsvcid": "4420" 00:17:26.378 }, 00:17:26.378 "peer_address": { 00:17:26.378 "trtype": "TCP", 00:17:26.378 "adrfam": "IPv4", 00:17:26.378 "traddr": "10.0.0.1", 00:17:26.378 "trsvcid": "44600" 00:17:26.378 }, 00:17:26.378 "auth": { 00:17:26.378 "state": "completed", 00:17:26.378 "digest": "sha512", 00:17:26.378 "dhgroup": "null" 00:17:26.378 } 00:17:26.378 } 00:17:26.378 ]' 00:17:26.378 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.637 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.638 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.638 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:26.638 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.638 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.638 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.638 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.896 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:26.896 22:48:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:27.466 22:48:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.466 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.466 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.466 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.466 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.466 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.467 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.467 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.726 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:27.985 00:17:27.985 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 
00:17:27.985 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.985 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:27.986 { 00:17:27.986 "cntlid": 103, 00:17:27.986 "qid": 0, 00:17:27.986 "state": "enabled", 00:17:27.986 "thread": "nvmf_tgt_poll_group_000", 00:17:27.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:27.986 "listen_address": { 00:17:27.986 "trtype": "TCP", 00:17:27.986 "adrfam": "IPv4", 00:17:27.986 "traddr": "10.0.0.2", 00:17:27.986 "trsvcid": "4420" 00:17:27.986 }, 00:17:27.986 "peer_address": { 00:17:27.986 "trtype": "TCP", 00:17:27.986 "adrfam": "IPv4", 00:17:27.986 "traddr": "10.0.0.1", 00:17:27.986 "trsvcid": "44618" 00:17:27.986 }, 00:17:27.986 "auth": { 00:17:27.986 "state": "completed", 00:17:27.986 "digest": "sha512", 00:17:27.986 "dhgroup": "null" 00:17:27.986 } 00:17:27.986 } 00:17:27.986 ]' 00:17:27.986 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.245 22:48:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.503 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:28.503 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.079 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:29.079 22:48:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.079 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.338 
22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.338 22:48:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:29.338 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.597 { 00:17:29.597 "cntlid": 105, 00:17:29.597 "qid": 0, 00:17:29.597 "state": "enabled", 00:17:29.597 "thread": "nvmf_tgt_poll_group_000", 00:17:29.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:29.597 "listen_address": { 00:17:29.597 "trtype": "TCP", 00:17:29.597 "adrfam": "IPv4", 00:17:29.597 "traddr": "10.0.0.2", 00:17:29.597 "trsvcid": "4420" 00:17:29.597 }, 00:17:29.597 "peer_address": { 00:17:29.597 "trtype": "TCP", 00:17:29.597 "adrfam": "IPv4", 00:17:29.597 "traddr": "10.0.0.1", 00:17:29.597 "trsvcid": "44650" 00:17:29.597 }, 00:17:29.597 "auth": { 00:17:29.597 "state": "completed", 00:17:29.597 "digest": "sha512", 00:17:29.597 "dhgroup": "ffdhe2048" 00:17:29.597 } 00:17:29.597 } 00:17:29.597 ]' 00:17:29.597 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:29.857 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.115 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:30.115 22:48:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.682 
22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.682 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:30.941 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:31.200 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.200 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.460 { 00:17:31.460 "cntlid": 107, 00:17:31.460 "qid": 0, 00:17:31.460 "state": "enabled", 00:17:31.460 "thread": "nvmf_tgt_poll_group_000", 00:17:31.460 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:31.460 "listen_address": { 00:17:31.460 "trtype": "TCP", 00:17:31.460 "adrfam": "IPv4", 00:17:31.460 "traddr": "10.0.0.2", 00:17:31.460 "trsvcid": "4420" 00:17:31.460 }, 00:17:31.460 "peer_address": { 00:17:31.460 "trtype": "TCP", 00:17:31.460 "adrfam": "IPv4", 00:17:31.460 "traddr": "10.0.0.1", 00:17:31.460 "trsvcid": "44666" 00:17:31.460 }, 00:17:31.460 "auth": { 00:17:31.460 "state": "completed", 00:17:31.460 "digest": "sha512", 00:17:31.460 "dhgroup": "ffdhe2048" 00:17:31.460 } 00:17:31.460 } 00:17:31.460 ]' 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:31.460 22:48:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.460 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.460 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.460 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.719 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:31.719 22:48:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.285 22:48:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # 
local digest dhgroup key ckey qpairs 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.544 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:32.803 
00:17:32.803 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:32.803 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:32.803 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.803 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.803 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.063 { 00:17:33.063 "cntlid": 109, 00:17:33.063 "qid": 0, 00:17:33.063 "state": "enabled", 00:17:33.063 "thread": "nvmf_tgt_poll_group_000", 00:17:33.063 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:33.063 "listen_address": { 00:17:33.063 "trtype": "TCP", 00:17:33.063 "adrfam": "IPv4", 00:17:33.063 "traddr": "10.0.0.2", 00:17:33.063 "trsvcid": "4420" 00:17:33.063 }, 00:17:33.063 "peer_address": { 00:17:33.063 "trtype": "TCP", 00:17:33.063 "adrfam": "IPv4", 00:17:33.063 "traddr": "10.0.0.1", 00:17:33.063 "trsvcid": "57324" 00:17:33.063 }, 00:17:33.063 "auth": { 00:17:33.063 "state": "completed", 00:17:33.063 "digest": "sha512", 00:17:33.063 "dhgroup": "ffdhe2048" 00:17:33.063 } 00:17:33.063 } 00:17:33.063 ]' 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.063 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.321 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:33.321 22:48:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.889 22:48:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:33.889 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.147 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:34.406 00:17:34.406 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:34.406 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:34.406 22:48:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.406 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.406 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.406 22:48:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.406 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.665 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.665 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:34.665 { 00:17:34.665 "cntlid": 111, 00:17:34.665 "qid": 0, 00:17:34.665 "state": "enabled", 00:17:34.665 "thread": "nvmf_tgt_poll_group_000", 00:17:34.666 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:34.666 "listen_address": { 00:17:34.666 "trtype": "TCP", 00:17:34.666 "adrfam": "IPv4", 00:17:34.666 "traddr": "10.0.0.2", 00:17:34.666 "trsvcid": "4420" 00:17:34.666 }, 00:17:34.666 "peer_address": { 00:17:34.666 "trtype": "TCP", 00:17:34.666 "adrfam": "IPv4", 00:17:34.666 "traddr": "10.0.0.1", 00:17:34.666 "trsvcid": "57350" 00:17:34.666 }, 00:17:34.666 "auth": { 00:17:34.666 "state": "completed", 00:17:34.666 "digest": "sha512", 00:17:34.666 "dhgroup": "ffdhe2048" 00:17:34.666 } 00:17:34.666 } 00:17:34.666 ]' 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.666 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.925 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:34.925 22:48:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.492 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.492 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:35.751 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:36.010 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.010 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:36.269 { 00:17:36.269 "cntlid": 113, 00:17:36.269 "qid": 0, 00:17:36.269 "state": "enabled", 00:17:36.269 "thread": "nvmf_tgt_poll_group_000", 00:17:36.269 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 
00:17:36.269 "listen_address": { 00:17:36.269 "trtype": "TCP", 00:17:36.269 "adrfam": "IPv4", 00:17:36.269 "traddr": "10.0.0.2", 00:17:36.269 "trsvcid": "4420" 00:17:36.269 }, 00:17:36.269 "peer_address": { 00:17:36.269 "trtype": "TCP", 00:17:36.269 "adrfam": "IPv4", 00:17:36.269 "traddr": "10.0.0.1", 00:17:36.269 "trsvcid": "57370" 00:17:36.269 }, 00:17:36.269 "auth": { 00:17:36.269 "state": "completed", 00:17:36.269 "digest": "sha512", 00:17:36.269 "dhgroup": "ffdhe3072" 00:17:36.269 } 00:17:36.269 } 00:17:36.269 ]' 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.269 22:48:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.527 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:36.527 22:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.093 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:37.352 22:48:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.352 22:48:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:37.608 00:17:37.608 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:37.608 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:37.608 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:37.866 { 00:17:37.866 "cntlid": 115, 00:17:37.866 "qid": 0, 00:17:37.866 "state": "enabled", 00:17:37.866 "thread": "nvmf_tgt_poll_group_000", 00:17:37.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:37.866 "listen_address": { 00:17:37.866 "trtype": "TCP", 00:17:37.866 "adrfam": "IPv4", 00:17:37.866 "traddr": "10.0.0.2", 00:17:37.866 "trsvcid": "4420" 00:17:37.866 }, 00:17:37.866 "peer_address": { 00:17:37.866 "trtype": "TCP", 00:17:37.866 "adrfam": "IPv4", 00:17:37.866 "traddr": "10.0.0.1", 00:17:37.866 "trsvcid": "57404" 00:17:37.866 }, 00:17:37.866 "auth": { 00:17:37.866 "state": "completed", 00:17:37.866 "digest": "sha512", 00:17:37.866 "dhgroup": "ffdhe3072" 00:17:37.866 } 00:17:37.866 } 00:17:37.866 ]' 
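The qpairs JSON above is what the test inspects after each authenticated attach: `connect_authenticate` pipes the `nvmf_subsystem_get_qpairs` reply through `jq -r '.[0].auth.digest'`, `.[0].auth.dhgroup`, and `.[0].auth.state` and compares against the expected values. A minimal Python sketch of that same check, using values copied from the log (the helper name and sample dict are illustrative, not part of `auth.sh`):

```python
import json

# Sample reply in the shape SPDK's `nvmf_subsystem_get_qpairs` RPC returns,
# trimmed to the fields the test checks (values copied from the log above).
qpairs_json = '''
[
  {
    "cntlid": 115,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe3072"
    }
  }
]
'''

def check_auth(qpairs: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq checks in connect_authenticate(): the first qpair must
    show a completed DH-HMAC-CHAP handshake with the expected digest and
    FFDHE group."""
    auth = json.loads(qpairs)[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

print(check_auth(qpairs_json, "sha512", "ffdhe3072"))  # prints True
```

This is the pass/fail gate for each (digest, dhgroup, keyid) combination in the loop; a mismatch in any of the three fields fails the iteration before the controller is detached.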
00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:37.866 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.124 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:38.124 22:48:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.691 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.691 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:38.950 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:39.208 00:17:39.208 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:39.208 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:39.208 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.467 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.467 22:48:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.467 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.467 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.467 22:48:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:39.467 { 00:17:39.467 "cntlid": 117, 00:17:39.467 "qid": 0, 00:17:39.467 "state": "enabled", 00:17:39.467 "thread": "nvmf_tgt_poll_group_000", 00:17:39.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:39.467 "listen_address": { 00:17:39.467 "trtype": "TCP", 00:17:39.467 "adrfam": "IPv4", 00:17:39.467 "traddr": "10.0.0.2", 00:17:39.467 "trsvcid": "4420" 00:17:39.467 }, 00:17:39.467 "peer_address": { 00:17:39.467 "trtype": "TCP", 00:17:39.467 "adrfam": "IPv4", 00:17:39.467 "traddr": "10.0.0.1", 00:17:39.467 "trsvcid": "57424" 00:17:39.467 }, 00:17:39.467 "auth": { 00:17:39.467 "state": "completed", 00:17:39.467 "digest": "sha512", 00:17:39.467 "dhgroup": "ffdhe3072" 00:17:39.467 } 00:17:39.467 } 00:17:39.467 ]' 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:39.467 22:48:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.467 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:39.726 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:39.726 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.293 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.293 
22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.293 22:48:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:40.552 22:48:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.552 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:40.811 00:17:40.811 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:40.811 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:40.811 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:41.070 { 00:17:41.070 "cntlid": 119, 00:17:41.070 "qid": 0, 00:17:41.070 "state": "enabled", 00:17:41.070 "thread": "nvmf_tgt_poll_group_000", 00:17:41.070 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:41.070 "listen_address": { 00:17:41.070 "trtype": "TCP", 00:17:41.070 "adrfam": "IPv4", 00:17:41.070 "traddr": "10.0.0.2", 00:17:41.070 "trsvcid": "4420" 00:17:41.070 }, 00:17:41.070 "peer_address": { 00:17:41.070 "trtype": "TCP", 00:17:41.070 "adrfam": "IPv4", 00:17:41.070 "traddr": "10.0.0.1", 00:17:41.070 "trsvcid": "57458" 00:17:41.070 }, 00:17:41.070 "auth": { 00:17:41.070 "state": "completed", 00:17:41.070 "digest": "sha512", 00:17:41.070 "dhgroup": "ffdhe3072" 00:17:41.070 } 00:17:41.070 } 00:17:41.070 ]' 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.070 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.328 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:41.328 22:48:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:41.896 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.155 22:48:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:42.414 00:17:42.414 22:48:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:42.414 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:42.414 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.672 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.672 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.672 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.672 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.672 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.672 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:42.672 { 00:17:42.672 "cntlid": 121, 00:17:42.672 "qid": 0, 00:17:42.672 "state": "enabled", 00:17:42.672 "thread": "nvmf_tgt_poll_group_000", 00:17:42.672 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:42.672 "listen_address": { 00:17:42.672 "trtype": "TCP", 00:17:42.672 "adrfam": "IPv4", 00:17:42.672 "traddr": "10.0.0.2", 00:17:42.672 "trsvcid": "4420" 00:17:42.672 }, 00:17:42.672 "peer_address": { 00:17:42.673 "trtype": "TCP", 00:17:42.673 "adrfam": "IPv4", 00:17:42.673 "traddr": "10.0.0.1", 00:17:42.673 "trsvcid": "40568" 00:17:42.673 }, 00:17:42.673 "auth": { 00:17:42.673 "state": "completed", 00:17:42.673 "digest": "sha512", 00:17:42.673 "dhgroup": "ffdhe4096" 00:17:42.673 } 00:17:42.673 } 00:17:42.673 ]' 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.673 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.930 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:42.930 22:48:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:43.497 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.497 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.497 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.498 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.498 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.498 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.498 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:43.498 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.498 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.756 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.757 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.015 00:17:44.015 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:44.015 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:44.015 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.274 22:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:44.274 { 00:17:44.274 "cntlid": 123, 00:17:44.274 "qid": 0, 00:17:44.274 "state": "enabled", 00:17:44.274 "thread": "nvmf_tgt_poll_group_000", 00:17:44.274 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:44.274 "listen_address": { 00:17:44.274 "trtype": "TCP", 00:17:44.274 "adrfam": "IPv4", 00:17:44.274 "traddr": "10.0.0.2", 00:17:44.274 "trsvcid": "4420" 00:17:44.274 }, 00:17:44.274 "peer_address": { 00:17:44.274 "trtype": "TCP", 00:17:44.274 "adrfam": "IPv4", 00:17:44.274 "traddr": "10.0.0.1", 00:17:44.274 "trsvcid": "40598" 00:17:44.274 }, 00:17:44.274 "auth": { 00:17:44.274 "state": "completed", 00:17:44.274 "digest": "sha512", 00:17:44.274 "dhgroup": "ffdhe4096" 00:17:44.274 } 00:17:44.274 } 00:17:44.274 ]' 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:44.274 22:48:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.274 22:48:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.533 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:44.533 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.100 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.100 
22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.100 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.359 22:48:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:45.617 00:17:45.617 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:45.617 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:45.617 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:45.876 { 00:17:45.876 "cntlid": 125, 00:17:45.876 "qid": 0, 00:17:45.876 
"state": "enabled", 00:17:45.876 "thread": "nvmf_tgt_poll_group_000", 00:17:45.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:45.876 "listen_address": { 00:17:45.876 "trtype": "TCP", 00:17:45.876 "adrfam": "IPv4", 00:17:45.876 "traddr": "10.0.0.2", 00:17:45.876 "trsvcid": "4420" 00:17:45.876 }, 00:17:45.876 "peer_address": { 00:17:45.876 "trtype": "TCP", 00:17:45.876 "adrfam": "IPv4", 00:17:45.876 "traddr": "10.0.0.1", 00:17:45.876 "trsvcid": "40620" 00:17:45.876 }, 00:17:45.876 "auth": { 00:17:45.876 "state": "completed", 00:17:45.876 "digest": "sha512", 00:17:45.876 "dhgroup": "ffdhe4096" 00:17:45.876 } 00:17:45.876 } 00:17:45.876 ]' 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.876 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.134 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret 
DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:46.134 22:48:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.702 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:46.961 
22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.961 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.219 00:17:47.219 22:48:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:47.219 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:47.219 22:48:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:47.481 { 00:17:47.481 "cntlid": 127, 00:17:47.481 "qid": 0, 00:17:47.481 "state": "enabled", 00:17:47.481 "thread": "nvmf_tgt_poll_group_000", 00:17:47.481 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:47.481 "listen_address": { 00:17:47.481 "trtype": "TCP", 00:17:47.481 "adrfam": "IPv4", 00:17:47.481 "traddr": "10.0.0.2", 00:17:47.481 "trsvcid": "4420" 00:17:47.481 }, 00:17:47.481 "peer_address": { 00:17:47.481 "trtype": "TCP", 00:17:47.481 "adrfam": "IPv4", 00:17:47.481 "traddr": "10.0.0.1", 00:17:47.481 "trsvcid": "40656" 00:17:47.481 }, 00:17:47.481 "auth": { 00:17:47.481 "state": "completed", 00:17:47.481 "digest": "sha512", 00:17:47.481 "dhgroup": "ffdhe4096" 00:17:47.481 } 00:17:47.481 } 00:17:47.481 ]' 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # 
jq -r '.[0].auth.digest' 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:47.481 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:47.739 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.739 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.739 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.739 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:47.739 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:48.306 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:48.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:48.306 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:48.306 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.306 22:48:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.306 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.306 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:48.306 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:48.306 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.306 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.565 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.823 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:49.082 { 00:17:49.082 "cntlid": 129, 00:17:49.082 "qid": 0, 00:17:49.082 "state": "enabled", 00:17:49.082 "thread": "nvmf_tgt_poll_group_000", 00:17:49.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:49.082 "listen_address": { 00:17:49.082 "trtype": "TCP", 00:17:49.082 "adrfam": "IPv4", 00:17:49.082 "traddr": "10.0.0.2", 00:17:49.082 "trsvcid": "4420" 00:17:49.082 }, 00:17:49.082 "peer_address": { 00:17:49.082 "trtype": "TCP", 00:17:49.082 "adrfam": "IPv4", 00:17:49.082 "traddr": "10.0.0.1", 00:17:49.082 "trsvcid": "40678" 00:17:49.082 }, 00:17:49.082 "auth": { 00:17:49.082 "state": "completed", 00:17:49.082 "digest": "sha512", 00:17:49.082 "dhgroup": "ffdhe6144" 00:17:49.082 } 00:17:49.082 } 00:17:49.082 ]' 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.082 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:49.341 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:49.341 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:49.341 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:17:49.341 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.341 22:48:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.599 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:49.599 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.166 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:50.166 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.167 22:48:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.734 00:17:50.734 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:50.734 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:50.734 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.992 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.992 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:50.992 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.992 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.992 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.992 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:50.992 { 00:17:50.992 "cntlid": 131, 00:17:50.992 "qid": 0, 
00:17:50.992 "state": "enabled", 00:17:50.992 "thread": "nvmf_tgt_poll_group_000", 00:17:50.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:50.993 "listen_address": { 00:17:50.993 "trtype": "TCP", 00:17:50.993 "adrfam": "IPv4", 00:17:50.993 "traddr": "10.0.0.2", 00:17:50.993 "trsvcid": "4420" 00:17:50.993 }, 00:17:50.993 "peer_address": { 00:17:50.993 "trtype": "TCP", 00:17:50.993 "adrfam": "IPv4", 00:17:50.993 "traddr": "10.0.0.1", 00:17:50.993 "trsvcid": "40724" 00:17:50.993 }, 00:17:50.993 "auth": { 00:17:50.993 "state": "completed", 00:17:50.993 "digest": "sha512", 00:17:50.993 "dhgroup": "ffdhe6144" 00:17:50.993 } 00:17:50.993 } 00:17:50.993 ]' 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.993 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.251 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret 
DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:51.251 22:48:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:51.818 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:51.819 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:51.819 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 
ffdhe6144 2 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.077 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:52.336 00:17:52.336 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.336 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.336 22:48:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.594 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.594 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.594 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.594 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.594 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.594 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:52.594 { 00:17:52.594 "cntlid": 133, 00:17:52.594 "qid": 0, 00:17:52.594 "state": "enabled", 00:17:52.594 "thread": "nvmf_tgt_poll_group_000", 00:17:52.594 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:52.594 "listen_address": { 00:17:52.594 "trtype": "TCP", 00:17:52.594 "adrfam": "IPv4", 00:17:52.594 "traddr": "10.0.0.2", 00:17:52.595 "trsvcid": "4420" 00:17:52.595 }, 00:17:52.595 "peer_address": { 00:17:52.595 "trtype": "TCP", 00:17:52.595 "adrfam": "IPv4", 00:17:52.595 "traddr": "10.0.0.1", 00:17:52.595 "trsvcid": "46666" 00:17:52.595 }, 00:17:52.595 "auth": { 00:17:52.595 "state": "completed", 00:17:52.595 "digest": "sha512", 00:17:52.595 "dhgroup": "ffdhe6144" 00:17:52.595 } 
00:17:52.595 } 00:17:52.595 ]' 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.595 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.853 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:52.853 22:49:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:17:53.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.421 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.680 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:53.938 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:54.197 { 00:17:54.197 "cntlid": 135, 00:17:54.197 "qid": 0, 00:17:54.197 "state": "enabled", 00:17:54.197 "thread": "nvmf_tgt_poll_group_000", 00:17:54.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:54.197 "listen_address": { 00:17:54.197 "trtype": "TCP", 00:17:54.197 "adrfam": "IPv4", 00:17:54.197 "traddr": "10.0.0.2", 00:17:54.197 "trsvcid": "4420" 00:17:54.197 }, 00:17:54.197 "peer_address": { 00:17:54.197 "trtype": "TCP", 00:17:54.197 "adrfam": "IPv4", 00:17:54.197 "traddr": "10.0.0.1", 00:17:54.197 "trsvcid": "46684" 00:17:54.197 }, 00:17:54.197 "auth": { 00:17:54.197 "state": "completed", 00:17:54.197 "digest": "sha512", 00:17:54.197 "dhgroup": "ffdhe6144" 00:17:54.197 } 00:17:54.197 } 00:17:54.197 ]' 00:17:54.197 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:54.456 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.456 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:54.456 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:54.456 22:49:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:54.456 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.456 22:49:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.456 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.715 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:54.715 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.283 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:55.283 
22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # 
hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.283 22:49:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:55.850 00:17:55.850 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:55.850 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:55.850 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:56.108 { 00:17:56.108 "cntlid": 137, 00:17:56.108 "qid": 0, 00:17:56.108 "state": "enabled", 00:17:56.108 "thread": "nvmf_tgt_poll_group_000", 00:17:56.108 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:56.108 "listen_address": { 00:17:56.108 "trtype": "TCP", 00:17:56.108 "adrfam": "IPv4", 00:17:56.108 "traddr": "10.0.0.2", 00:17:56.108 "trsvcid": "4420" 00:17:56.108 }, 00:17:56.108 "peer_address": { 00:17:56.108 "trtype": "TCP", 00:17:56.108 "adrfam": "IPv4", 00:17:56.108 "traddr": "10.0.0.1", 00:17:56.108 "trsvcid": "46708" 00:17:56.108 }, 00:17:56.108 "auth": { 00:17:56.108 "state": "completed", 00:17:56.108 "digest": "sha512", 00:17:56.108 "dhgroup": "ffdhe8192" 00:17:56.108 } 00:17:56.108 } 00:17:56.108 ]' 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.108 22:49:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:56.367 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret 
DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:56.367 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:56.934 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:57.194 22:49:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.194 22:49:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:57.761 00:17:57.761 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.761 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.761 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:58.019 { 00:17:58.019 "cntlid": 139, 00:17:58.019 "qid": 0, 00:17:58.019 "state": "enabled", 00:17:58.019 "thread": "nvmf_tgt_poll_group_000", 00:17:58.019 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:58.019 "listen_address": { 00:17:58.019 "trtype": "TCP", 00:17:58.019 "adrfam": "IPv4", 00:17:58.019 "traddr": "10.0.0.2", 00:17:58.019 "trsvcid": "4420" 00:17:58.019 }, 00:17:58.019 "peer_address": { 00:17:58.019 "trtype": "TCP", 00:17:58.019 "adrfam": "IPv4", 00:17:58.019 "traddr": "10.0.0.1", 00:17:58.019 "trsvcid": "46732" 00:17:58.019 }, 00:17:58.019 "auth": { 00:17:58.019 "state": 
"completed", 00:17:58.019 "digest": "sha512", 00:17:58.019 "dhgroup": "ffdhe8192" 00:17:58.019 } 00:17:58.019 } 00:17:58.019 ]' 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.019 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:58.278 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:58.278 22:49:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: --dhchap-ctrl-secret DHHC-1:02:ZmE2ZWU4YmQ2YWJmZjZiYWUzZjFjNzE3ZDZlYzMzODRiZjQ3NjlmNTNlMTlmYzkzZRferw==: 00:17:58.845 22:49:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:58.845 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:59.103 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.104 22:49:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:59.671 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.671 
22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.671 { 00:17:59.671 "cntlid": 141, 00:17:59.671 "qid": 0, 00:17:59.671 "state": "enabled", 00:17:59.671 "thread": "nvmf_tgt_poll_group_000", 00:17:59.671 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:17:59.671 "listen_address": { 00:17:59.671 "trtype": "TCP", 00:17:59.671 "adrfam": "IPv4", 00:17:59.671 "traddr": "10.0.0.2", 00:17:59.671 "trsvcid": "4420" 00:17:59.671 }, 00:17:59.671 "peer_address": { 00:17:59.671 "trtype": "TCP", 00:17:59.671 "adrfam": "IPv4", 00:17:59.671 "traddr": "10.0.0.1", 00:17:59.671 "trsvcid": "46766" 00:17:59.671 }, 00:17:59.671 "auth": { 00:17:59.671 "state": "completed", 00:17:59.671 "digest": "sha512", 00:17:59.671 "dhgroup": "ffdhe8192" 00:17:59.671 } 00:17:59.671 } 00:17:59.671 ]' 00:17:59.671 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.930 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:59.930 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.930 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:59.930 22:49:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.930 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.930 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.930 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.189 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:18:00.189 22:49:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:01:OGQ1ZjMzYWZhYmVmOGRhYTE2M2FiYmQwNDc4NjhjZTns83ga: 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:00.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.756 
22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.756 22:49:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:00.756 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:01.323 00:18:01.323 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.323 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.323 22:49:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.581 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.581 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.581 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.581 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.582 { 00:18:01.582 "cntlid": 143, 
00:18:01.582 "qid": 0, 00:18:01.582 "state": "enabled", 00:18:01.582 "thread": "nvmf_tgt_poll_group_000", 00:18:01.582 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:01.582 "listen_address": { 00:18:01.582 "trtype": "TCP", 00:18:01.582 "adrfam": "IPv4", 00:18:01.582 "traddr": "10.0.0.2", 00:18:01.582 "trsvcid": "4420" 00:18:01.582 }, 00:18:01.582 "peer_address": { 00:18:01.582 "trtype": "TCP", 00:18:01.582 "adrfam": "IPv4", 00:18:01.582 "traddr": "10.0.0.1", 00:18:01.582 "trsvcid": "46792" 00:18:01.582 }, 00:18:01.582 "auth": { 00:18:01.582 "state": "completed", 00:18:01.582 "digest": "sha512", 00:18:01.582 "dhgroup": "ffdhe8192" 00:18:01.582 } 00:18:01.582 } 00:18:01.582 ]' 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.582 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:01.840 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:18:01.840 22:49:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:18:02.407 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:02.666 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:03.233 00:18:03.233 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:03.233 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:03.233 22:49:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:03.491 { 00:18:03.491 "cntlid": 145, 00:18:03.491 "qid": 0, 00:18:03.491 "state": "enabled", 00:18:03.491 "thread": "nvmf_tgt_poll_group_000", 00:18:03.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:03.491 "listen_address": 
{ 00:18:03.491 "trtype": "TCP", 00:18:03.491 "adrfam": "IPv4", 00:18:03.491 "traddr": "10.0.0.2", 00:18:03.491 "trsvcid": "4420" 00:18:03.491 }, 00:18:03.491 "peer_address": { 00:18:03.491 "trtype": "TCP", 00:18:03.491 "adrfam": "IPv4", 00:18:03.491 "traddr": "10.0.0.1", 00:18:03.491 "trsvcid": "49474" 00:18:03.491 }, 00:18:03.491 "auth": { 00:18:03.491 "state": "completed", 00:18:03.491 "digest": "sha512", 00:18:03.491 "dhgroup": "ffdhe8192" 00:18:03.491 } 00:18:03.491 } 00:18:03.491 ]' 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:03.491 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:03.492 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:03.492 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.492 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.492 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.750 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:18:03.750 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:MDdjOWZkOWRlZTVlMGE4MTllYjkxMTQxMmMxZTJiYjlkZjAzMDQwNmQyMDliMTdh07Cfwg==: --dhchap-ctrl-secret DHHC-1:03:YjgxYjY2NjJmNzkyZDdlZTM1OWNkYTExNzJjODAyNWU0NWY4MWQzNTc5MzAzMjRjYWQ4ODAxZjA3Nzk1NTI0MoAn0BM=: 00:18:04.317 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:04.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:04.317 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.317 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.317 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.317 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.317 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@652 -- # local es=0 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:04.318 22:49:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:18:04.886 request: 00:18:04.886 { 00:18:04.886 "name": "nvme0", 00:18:04.886 "trtype": "tcp", 00:18:04.886 "traddr": "10.0.0.2", 00:18:04.886 "adrfam": "ipv4", 00:18:04.886 "trsvcid": "4420", 00:18:04.886 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:04.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:04.886 "prchk_reftag": false, 00:18:04.886 "prchk_guard": false, 00:18:04.886 "hdgst": false, 00:18:04.886 "ddgst": 
false, 00:18:04.886 "dhchap_key": "key2", 00:18:04.886 "allow_unrecognized_csi": false, 00:18:04.886 "method": "bdev_nvme_attach_controller", 00:18:04.886 "req_id": 1 00:18:04.886 } 00:18:04.886 Got JSON-RPC error response 00:18:04.886 response: 00:18:04.886 { 00:18:04.886 "code": -5, 00:18:04.886 "message": "Input/output error" 00:18:04.886 } 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:04.886 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:05.145 request: 00:18:05.145 { 00:18:05.145 "name": "nvme0", 00:18:05.145 "trtype": "tcp", 00:18:05.145 "traddr": "10.0.0.2", 
00:18:05.145 "adrfam": "ipv4", 00:18:05.145 "trsvcid": "4420", 00:18:05.145 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.145 "prchk_reftag": false, 00:18:05.145 "prchk_guard": false, 00:18:05.145 "hdgst": false, 00:18:05.145 "ddgst": false, 00:18:05.145 "dhchap_key": "key1", 00:18:05.145 "dhchap_ctrlr_key": "ckey2", 00:18:05.145 "allow_unrecognized_csi": false, 00:18:05.145 "method": "bdev_nvme_attach_controller", 00:18:05.145 "req_id": 1 00:18:05.145 } 00:18:05.145 Got JSON-RPC error response 00:18:05.145 response: 00:18:05.145 { 00:18:05.145 "code": -5, 00:18:05.145 "message": "Input/output error" 00:18:05.145 } 00:18:05.145 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:05.145 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.145 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.145 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.145 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.404 22:49:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.662 request: 00:18:05.662 { 00:18:05.662 "name": "nvme0", 00:18:05.662 "trtype": "tcp", 00:18:05.662 "traddr": "10.0.0.2", 00:18:05.662 "adrfam": "ipv4", 00:18:05.662 "trsvcid": "4420", 00:18:05.662 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:05.662 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:05.662 "prchk_reftag": false, 00:18:05.662 "prchk_guard": false, 00:18:05.662 "hdgst": false, 00:18:05.662 "ddgst": false, 00:18:05.662 "dhchap_key": "key1", 00:18:05.662 "dhchap_ctrlr_key": "ckey1", 00:18:05.662 "allow_unrecognized_csi": false, 00:18:05.663 "method": "bdev_nvme_attach_controller", 00:18:05.663 "req_id": 1 00:18:05.663 } 00:18:05.663 Got JSON-RPC error response 00:18:05.663 response: 00:18:05.663 { 00:18:05.663 "code": -5, 00:18:05.663 "message": "Input/output error" 00:18:05.663 } 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.663 
22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2354204 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2354204 ']' 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2354204 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:05.663 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354204 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354204' 00:18:05.921 killing process with pid 2354204 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2354204 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2354204 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2376548 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2376548 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2376548 ']' 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.921 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2376548 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2376548 ']' 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:06.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.179 22:49:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 null0 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.iGe 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.X79 ]] 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.X79 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 22:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.PhZ 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.438 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.L2x ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.L2x 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.WqS 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 22:49:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Xke ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Xke 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OYD 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key3 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:06.697 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:07.265 nvme0n1 00:18:07.523 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.523 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.523 22:49:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:07.523 { 00:18:07.523 "cntlid": 1, 00:18:07.523 "qid": 0, 00:18:07.523 "state": "enabled", 00:18:07.523 "thread": "nvmf_tgt_poll_group_000", 00:18:07.523 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:07.523 "listen_address": { 00:18:07.523 "trtype": "TCP", 00:18:07.523 "adrfam": "IPv4", 00:18:07.523 "traddr": "10.0.0.2", 00:18:07.523 "trsvcid": "4420" 00:18:07.523 }, 00:18:07.523 "peer_address": { 00:18:07.523 "trtype": "TCP", 00:18:07.523 "adrfam": "IPv4", 00:18:07.523 "traddr": "10.0.0.1", 00:18:07.523 "trsvcid": "49550" 00:18:07.523 }, 00:18:07.523 "auth": { 00:18:07.523 "state": "completed", 00:18:07.523 "digest": "sha512", 00:18:07.523 "dhgroup": "ffdhe8192" 00:18:07.523 } 00:18:07.523 } 00:18:07.523 ]' 00:18:07.523 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:07.782 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.040 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:18:08.040 22:49:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:08.607 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.865 request: 00:18:08.865 { 00:18:08.865 "name": "nvme0", 00:18:08.865 "trtype": "tcp", 00:18:08.865 "traddr": "10.0.0.2", 00:18:08.865 "adrfam": "ipv4", 00:18:08.865 "trsvcid": "4420", 00:18:08.865 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:08.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:08.865 "prchk_reftag": false, 00:18:08.865 "prchk_guard": false, 00:18:08.865 "hdgst": false, 00:18:08.865 "ddgst": false, 00:18:08.865 "dhchap_key": "key3", 00:18:08.865 "allow_unrecognized_csi": false, 00:18:08.865 "method": "bdev_nvme_attach_controller", 00:18:08.865 "req_id": 1 00:18:08.865 } 00:18:08.865 Got JSON-RPC error response 00:18:08.865 response: 00:18:08.865 { 00:18:08.865 "code": -5, 00:18:08.865 "message": "Input/output error" 00:18:08.865 } 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:08.865 
22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:18:08.865 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:08.866 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.124 22:49:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:09.382 request: 00:18:09.383 { 00:18:09.383 "name": "nvme0", 00:18:09.383 "trtype": "tcp", 00:18:09.383 "traddr": "10.0.0.2", 00:18:09.383 "adrfam": "ipv4", 00:18:09.383 "trsvcid": "4420", 00:18:09.383 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:09.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:09.383 "prchk_reftag": false, 00:18:09.383 "prchk_guard": false, 00:18:09.383 "hdgst": false, 00:18:09.383 "ddgst": false, 00:18:09.383 "dhchap_key": "key3", 00:18:09.383 "allow_unrecognized_csi": false, 00:18:09.383 "method": "bdev_nvme_attach_controller", 00:18:09.383 "req_id": 1 00:18:09.383 } 00:18:09.383 Got JSON-RPC error response 00:18:09.383 response: 00:18:09.383 { 00:18:09.383 "code": -5, 00:18:09.383 "message": "Input/output error" 00:18:09.383 } 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@197 -- # IFS=, 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.383 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.641 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.642 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:09.900 request: 00:18:09.900 { 00:18:09.900 "name": "nvme0", 00:18:09.900 "trtype": "tcp", 00:18:09.900 "traddr": "10.0.0.2", 00:18:09.900 "adrfam": "ipv4", 00:18:09.900 "trsvcid": "4420", 00:18:09.900 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:09.900 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:09.900 "prchk_reftag": false, 00:18:09.900 "prchk_guard": false, 00:18:09.900 "hdgst": false, 00:18:09.900 "ddgst": false, 00:18:09.900 "dhchap_key": "key0", 00:18:09.900 "dhchap_ctrlr_key": "key1", 00:18:09.900 "allow_unrecognized_csi": false, 00:18:09.900 "method": "bdev_nvme_attach_controller", 00:18:09.900 "req_id": 1 00:18:09.900 } 00:18:09.900 Got JSON-RPC error response 00:18:09.900 response: 00:18:09.900 { 00:18:09.900 "code": -5, 00:18:09.900 "message": "Input/output error" 00:18:09.900 } 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:09.900 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:18:10.159 nvme0n1 00:18:10.159 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc 
bdev_nvme_get_controllers 00:18:10.159 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:10.159 22:49:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:18:10.417 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:10.417 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.417 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:10.676 22:49:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:11.613 nvme0n1 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:18:11.613 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.871 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.871 22:49:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:18:11.872 22:49:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: --dhchap-ctrl-secret DHHC-1:03:NTJhMjY1YjAyZjQyNDlmZTAyYzUxNzVlNjk2YjliY2JiYTc1YTk1ZjE2NTA3MjViMTYxODVkYTk1MmU5N2Q1NMfsJw4=: 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:12.439 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:18:12.697 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:18:12.697 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:12.697 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:18:12.697 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:18:12.697 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.698 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:18:12.698 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.698 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:18:12.698 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:12.698 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:18:12.956 request: 00:18:12.956 { 00:18:12.956 "name": "nvme0", 00:18:12.956 "trtype": "tcp", 00:18:12.956 "traddr": "10.0.0.2", 00:18:12.956 "adrfam": "ipv4", 00:18:12.956 "trsvcid": "4420", 00:18:12.956 "subnqn": 
"nqn.2024-03.io.spdk:cnode0", 00:18:12.956 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:12.956 "prchk_reftag": false, 00:18:12.956 "prchk_guard": false, 00:18:12.956 "hdgst": false, 00:18:12.956 "ddgst": false, 00:18:12.956 "dhchap_key": "key1", 00:18:12.956 "allow_unrecognized_csi": false, 00:18:12.956 "method": "bdev_nvme_attach_controller", 00:18:12.956 "req_id": 1 00:18:12.956 } 00:18:12.956 Got JSON-RPC error response 00:18:12.956 response: 00:18:12.956 { 00:18:12.956 "code": -5, 00:18:12.956 "message": "Input/output error" 00:18:12.956 } 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:13.215 22:49:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:13.783 nvme0n1 00:18:13.783 22:49:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:18:13.783 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:18:13.783 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.041 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.041 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.042 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:14.300 22:49:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:18:14.562 nvme0n1 00:18:14.562 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:18:14.562 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:18:14.562 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: '' 2s 00:18:14.858 
22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: ]] 00:18:14.858 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MTYyZmVjYzAzMDIzODkzOTUxNjQyZWNlYWMwMmExNTHhtxj0: 00:18:14.859 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:18:14.859 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:14.859 22:49:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:16.857 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: 2s 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z 
DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: ]] 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:OWE3MTY3YjE0MDYxNTc5MDJlZjlmZjQwOTZiZDM0MjBjNGFiOTc5ZGViZDYzMGQ0PkRHzA==: 00:18:16.858 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:18:17.116 22:49:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:19.015 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:19.016 22:49:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:19.952 nvme0n1 00:18:19.952 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:19.952 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.952 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.952 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.952 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:19.952 22:49:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.210 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:18:20.210 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:18:20.210 22:49:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:18:20.468 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:18:20.727 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:18:20.727 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:18:20.727 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 
--dhchap-key key1 --dhchap-ctrlr-key key3 00:18:20.985 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:18:21.244 request: 00:18:21.244 { 00:18:21.244 "name": "nvme0", 00:18:21.244 "dhchap_key": "key1", 00:18:21.244 "dhchap_ctrlr_key": "key3", 00:18:21.244 "method": "bdev_nvme_set_keys", 00:18:21.244 "req_id": 1 00:18:21.244 } 00:18:21.244 Got JSON-RPC error response 00:18:21.244 response: 00:18:21.244 { 00:18:21.244 "code": -13, 00:18:21.244 "message": "Permission denied" 00:18:21.244 } 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:21.502 22:49:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.502 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:21.502 22:49:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # jq length 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:22.878 22:49:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:23.445 nvme0n1 
00:18:23.445 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:23.445 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:23.703 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 
--dhchap-ctrlr-key key0 00:18:23.962 request: 00:18:23.962 { 00:18:23.962 "name": "nvme0", 00:18:23.962 "dhchap_key": "key2", 00:18:23.962 "dhchap_ctrlr_key": "key0", 00:18:23.962 "method": "bdev_nvme_set_keys", 00:18:23.962 "req_id": 1 00:18:23.962 } 00:18:23.962 Got JSON-RPC error response 00:18:23.962 response: 00:18:23.962 { 00:18:23.962 "code": -13, 00:18:23.962 "message": "Permission denied" 00:18:23.962 } 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:23.962 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.220 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:24.220 22:49:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:25.155 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:25.155 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:25.155 22:49:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.414 22:49:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2354291 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2354291 ']' 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2354291 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2354291 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2354291' 00:18:25.414 killing process with pid 2354291 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2354291 00:18:25.414 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2354291 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@121 -- # sync 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:25.982 rmmod nvme_tcp 00:18:25.982 rmmod nvme_fabrics 00:18:25.982 rmmod nvme_keyring 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2376548 ']' 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2376548 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2376548 ']' 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2376548 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2376548 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:25.982 
22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2376548' 00:18:25.982 killing process with pid 2376548 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2376548 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2376548 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.982 22:49:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.518 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:28.518 22:49:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.iGe /tmp/spdk.key-sha256.PhZ /tmp/spdk.key-sha384.WqS /tmp/spdk.key-sha512.OYD /tmp/spdk.key-sha512.X79 /tmp/spdk.key-sha384.L2x /tmp/spdk.key-sha256.Xke '' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf-auth.log 00:18:28.518 00:18:28.518 real 2m34.016s 00:18:28.518 user 5m55.257s 00:18:28.518 sys 0m24.520s 00:18:28.518 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.519 ************************************ 00:18:28.519 END TEST nvmf_auth_target 00:18:28.519 ************************************ 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:28.519 ************************************ 00:18:28.519 START TEST nvmf_bdevio_no_huge 00:18:28.519 ************************************ 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:28.519 * Looking for test storage... 
00:18:28.519 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:18:28.519 22:49:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:28.519 22:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:28.519 22:49:36 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.519 --rc genhtml_branch_coverage=1 00:18:28.519 --rc genhtml_function_coverage=1 00:18:28.519 --rc genhtml_legend=1 00:18:28.519 --rc geninfo_all_blocks=1 00:18:28.519 --rc geninfo_unexecuted_blocks=1 00:18:28.519 00:18:28.519 ' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.519 --rc genhtml_branch_coverage=1 00:18:28.519 --rc genhtml_function_coverage=1 00:18:28.519 --rc genhtml_legend=1 00:18:28.519 --rc geninfo_all_blocks=1 00:18:28.519 --rc geninfo_unexecuted_blocks=1 00:18:28.519 00:18:28.519 ' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.519 --rc genhtml_branch_coverage=1 00:18:28.519 --rc genhtml_function_coverage=1 00:18:28.519 --rc genhtml_legend=1 00:18:28.519 --rc geninfo_all_blocks=1 00:18:28.519 --rc geninfo_unexecuted_blocks=1 00:18:28.519 00:18:28.519 ' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:28.519 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:28.519 --rc genhtml_branch_coverage=1 00:18:28.519 --rc genhtml_function_coverage=1 00:18:28.519 --rc genhtml_legend=1 00:18:28.519 --rc geninfo_all_blocks=1 00:18:28.519 --rc geninfo_unexecuted_blocks=1 00:18:28.519 00:18:28.519 ' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:28.519 
22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:28.519 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:28.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:18:28.520 22:49:36 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:35.091 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:18:35.092 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:35.092 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:35.092 Found net devices under 0000:86:00.0: cvl_0_0 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:35.092 
22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:35.092 Found net devices under 0000:86:00.1: cvl_0_1 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:18:35.092 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:35.092 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:18:35.092 00:18:35.092 --- 10.0.0.2 ping statistics --- 00:18:35.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.092 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:35.092 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:35.092 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:18:35.092 00:18:35.092 --- 10.0.0.1 ping statistics --- 00:18:35.092 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:35.092 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2383433 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2383433 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2383433 ']' 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.092 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.093 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:35.093 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.093 22:49:41 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.093 [2024-12-10 22:49:42.021482] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:18:35.093 [2024-12-10 22:49:42.021531] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:35.093 [2024-12-10 22:49:42.107350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:35.093 [2024-12-10 22:49:42.154607] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.093 [2024-12-10 22:49:42.154637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.093 [2024-12-10 22:49:42.154647] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.093 [2024-12-10 22:49:42.154655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.093 [2024-12-10 22:49:42.154661] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:35.093 [2024-12-10 22:49:42.155836] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:18:35.093 [2024-12-10 22:49:42.155942] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:18:35.093 [2024-12-10 22:49:42.156049] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:35.093 [2024-12-10 22:49:42.156050] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.351 [2024-12-10 22:49:42.902809] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:35.351 22:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.351 Malloc0 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.351 [2024-12-10 22:49:42.947150] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.351 22:49:42 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:18:35.351 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:18:35.351 { 00:18:35.351 "params": { 00:18:35.351 "name": "Nvme$subsystem", 00:18:35.351 "trtype": "$TEST_TRANSPORT", 00:18:35.351 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:35.351 "adrfam": "ipv4", 00:18:35.351 "trsvcid": "$NVMF_PORT", 00:18:35.351 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:35.351 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:35.352 "hdgst": ${hdgst:-false}, 00:18:35.352 "ddgst": ${ddgst:-false} 00:18:35.352 }, 00:18:35.352 "method": "bdev_nvme_attach_controller" 00:18:35.352 } 00:18:35.352 EOF 00:18:35.352 )") 00:18:35.352 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:18:35.352 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:18:35.352 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:18:35.352 22:49:42 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:18:35.352 "params": { 00:18:35.352 "name": "Nvme1", 00:18:35.352 "trtype": "tcp", 00:18:35.352 "traddr": "10.0.0.2", 00:18:35.352 "adrfam": "ipv4", 00:18:35.352 "trsvcid": "4420", 00:18:35.352 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:35.352 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:35.352 "hdgst": false, 00:18:35.352 "ddgst": false 00:18:35.352 }, 00:18:35.352 "method": "bdev_nvme_attach_controller" 00:18:35.352 }' 00:18:35.352 [2024-12-10 22:49:42.998804] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:18:35.352 [2024-12-10 22:49:42.998848] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2383680 ] 00:18:35.352 [2024-12-10 22:49:43.078105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:35.610 [2024-12-10 22:49:43.126968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.610 [2024-12-10 22:49:43.127076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.610 [2024-12-10 22:49:43.127077] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:35.868 I/O targets: 00:18:35.868 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:35.868 00:18:35.868 00:18:35.868 CUnit - A unit testing framework for C - Version 2.1-3 00:18:35.868 http://cunit.sourceforge.net/ 00:18:35.868 00:18:35.868 00:18:35.868 Suite: bdevio tests on: Nvme1n1 00:18:35.868 Test: blockdev write read block ...passed 00:18:35.868 Test: blockdev write zeroes read block ...passed 00:18:35.868 Test: blockdev write zeroes read no split ...passed 00:18:35.868 Test: blockdev write zeroes 
read split ...passed 00:18:35.868 Test: blockdev write zeroes read split partial ...passed 00:18:35.868 Test: blockdev reset ...[2024-12-10 22:49:43.541030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:35.868 [2024-12-10 22:49:43.541094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1183540 (9): Bad file descriptor 00:18:35.868 [2024-12-10 22:49:43.595310] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:35.868 passed 00:18:36.126 Test: blockdev write read 8 blocks ...passed 00:18:36.126 Test: blockdev write read size > 128k ...passed 00:18:36.126 Test: blockdev write read invalid size ...passed 00:18:36.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:36.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:36.126 Test: blockdev write read max offset ...passed 00:18:36.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:36.126 Test: blockdev writev readv 8 blocks ...passed 00:18:36.126 Test: blockdev writev readv 30 x 1block ...passed 00:18:36.126 Test: blockdev writev readv block ...passed 00:18:36.126 Test: blockdev writev readv size > 128k ...passed 00:18:36.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:36.126 Test: blockdev comparev and writev ...[2024-12-10 22:49:43.845852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.126 [2024-12-10 22:49:43.845881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:36.126 [2024-12-10 22:49:43.845896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.126 [2024-12-10 
22:49:43.845904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:36.126 [2024-12-10 22:49:43.846136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.126 [2024-12-10 22:49:43.846147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:36.126 [2024-12-10 22:49:43.846164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.126 [2024-12-10 22:49:43.846179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:36.126 [2024-12-10 22:49:43.846413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.126 [2024-12-10 22:49:43.846424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:36.127 [2024-12-10 22:49:43.846435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.127 [2024-12-10 22:49:43.846442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:36.127 [2024-12-10 22:49:43.846678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.127 [2024-12-10 22:49:43.846688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:36.127 [2024-12-10 22:49:43.846700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:18:36.127 [2024-12-10 22:49:43.846707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:36.385 passed 00:18:36.385 Test: blockdev nvme passthru rw ...passed 00:18:36.385 Test: blockdev nvme passthru vendor specific ...[2024-12-10 22:49:43.928452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.385 [2024-12-10 22:49:43.928469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:36.385 [2024-12-10 22:49:43.928570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.385 [2024-12-10 22:49:43.928581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:36.385 [2024-12-10 22:49:43.928684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.385 [2024-12-10 22:49:43.928694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:36.385 [2024-12-10 22:49:43.928795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:36.385 [2024-12-10 22:49:43.928805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:36.385 passed 00:18:36.385 Test: blockdev nvme admin passthru ...passed 00:18:36.385 Test: blockdev copy ...passed 00:18:36.385 00:18:36.385 Run Summary: Type Total Ran Passed Failed Inactive 00:18:36.385 suites 1 1 n/a 0 0 00:18:36.385 tests 23 23 23 0 0 00:18:36.385 asserts 152 152 152 0 n/a 00:18:36.385 00:18:36.385 Elapsed time = 1.137 seconds 
00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:36.643 rmmod nvme_tcp 00:18:36.643 rmmod nvme_fabrics 00:18:36.643 rmmod nvme_keyring 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2383433 ']' 00:18:36.643 22:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2383433 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2383433 ']' 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2383433 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:36.643 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2383433 00:18:36.901 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:36.901 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:36.901 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2383433' 00:18:36.901 killing process with pid 2383433 00:18:36.901 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2383433 00:18:36.901 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2383433 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:18:37.160 22:49:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:37.160 22:49:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.065 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:39.065 00:18:39.065 real 0m10.942s 00:18:39.065 user 0m14.171s 00:18:39.065 sys 0m5.395s 00:18:39.065 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:39.065 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:39.065 ************************************ 00:18:39.065 END TEST nvmf_bdevio_no_huge 00:18:39.065 ************************************ 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:39.324 
************************************ 00:18:39.324 START TEST nvmf_tls 00:18:39.324 ************************************ 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:39.324 * Looking for test storage... 00:18:39.324 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:18:39.324 22:49:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:39.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.324 --rc genhtml_branch_coverage=1 00:18:39.324 --rc genhtml_function_coverage=1 00:18:39.324 --rc genhtml_legend=1 00:18:39.324 --rc geninfo_all_blocks=1 00:18:39.324 --rc geninfo_unexecuted_blocks=1 00:18:39.324 00:18:39.324 ' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:39.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.324 --rc genhtml_branch_coverage=1 00:18:39.324 --rc genhtml_function_coverage=1 00:18:39.324 --rc genhtml_legend=1 00:18:39.324 --rc geninfo_all_blocks=1 00:18:39.324 --rc geninfo_unexecuted_blocks=1 00:18:39.324 00:18:39.324 ' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:39.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.324 --rc genhtml_branch_coverage=1 00:18:39.324 --rc genhtml_function_coverage=1 00:18:39.324 --rc genhtml_legend=1 00:18:39.324 --rc geninfo_all_blocks=1 00:18:39.324 --rc geninfo_unexecuted_blocks=1 00:18:39.324 00:18:39.324 ' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:39.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:39.324 --rc genhtml_branch_coverage=1 00:18:39.324 --rc genhtml_function_coverage=1 00:18:39.324 --rc genhtml_legend=1 00:18:39.324 --rc geninfo_all_blocks=1 00:18:39.324 --rc geninfo_unexecuted_blocks=1 00:18:39.324 00:18:39.324 ' 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:39.324 
22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:39.324 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:39.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:18:39.584 22:49:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:46.153 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.154 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:46.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:46.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.154 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:46.154 Found net devices under 0000:86:00.0: cvl_0_0 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:46.154 Found net devices under 0000:86:00.1: cvl_0_1 00:18:46.154 22:49:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:46.154 
22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:46.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:46.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:18:46.154 00:18:46.154 --- 10.0.0.2 ping statistics --- 00:18:46.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.154 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:46.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:46.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:18:46.154 00:18:46.154 --- 10.0.0.1 ping statistics --- 00:18:46.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:46.154 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:46.154 22:49:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2387449 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2387449 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2387449 ']' 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:46.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:46.154 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.154 [2024-12-10 22:49:53.064564] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:18:46.154 [2024-12-10 22:49:53.064607] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:46.154 [2024-12-10 22:49:53.146480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.154 [2024-12-10 22:49:53.184002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:46.154 [2024-12-10 22:49:53.184034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:46.154 [2024-12-10 22:49:53.184040] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:46.154 [2024-12-10 22:49:53.184046] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:46.154 [2024-12-10 22:49:53.184051] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:46.154 [2024-12-10 22:49:53.184608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:18:46.413 22:49:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:46.413 true 00:18:46.413 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.413 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:18:46.672 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:18:46.672 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:18:46.672 
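The `[[ 0 != \0 ]]` and `[[ 13 != \1\3 ]]` traces above look odd, but they are ordinary bash pattern checks: under `set -x`, bash re-prints the right-hand side of `[[ ... != pattern ]]` with every character backslash-escaped so the pattern stays literal. A minimal standalone reproduction of the tls_version check (no SPDK involved):

```shell
# Reproduce the xtrace pattern-match idiom seen in tls.sh: the RHS of
# [[ ... != ... ]] is a glob pattern, and escaping each character
# (\1\3) forces a literal comparison, which is how set -x renders it.
version=13
if [[ "$version" != \1\3 ]]; then
  result=mismatch
else
  result=match
fi
echo "tls_version check: $result"
```

The test script treats a match as success (it simply falls through), which is why the log shows the comparison line and then moves straight on to the next RPC.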
22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:46.930 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.930 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:18:47.189 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:18:47.189 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:18:47.189 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:47.189 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.189 22:49:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:18:47.448 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:18:47.448 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:18:47.448 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.448 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:18:47.706 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:18:47.706 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:18:47.706 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl 
--enable-ktls 00:18:47.965 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:47.965 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:18:47.965 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:18:47.966 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:18:47.966 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:48.224 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:18:48.224 22:49:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 
00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.MFsO3UWCPp 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.spAOgbYNdq 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.MFsO3UWCPp 00:18:48.483 22:49:56 
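The two `NVMeTLSkey-1:01:...:` strings produced above are TLS PSKs in the NVMe/TCP interchange format. Judging from the `format_key` helper's inputs and its output (the base64 payload decodes to the ASCII key string plus four trailing bytes), the `python -` step base64-encodes the configured PSK bytes followed by a little-endian CRC32. The sketch below is a reconstruction inferred from the log, not the actual `nvmf/common.sh` code; the function name mirrors the helper seen above:

```shell
# Hedged reconstruction of format_interchange_psk: wrap a configured PSK
# as NVMeTLSkey-1:<digest>:<base64(psk_bytes || CRC32 LE)>:.
# Note: in this log the helper uses the hex *string* itself as the PSK body.
format_interchange_psk() {
  local key=$1 digest=$2
  python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
psk = sys.argv[1].encode()                    # hex string taken as PSK bytes
crc = zlib.crc32(psk).to_bytes(4, "little")   # 4-byte little-endian CRC32
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02d}:{base64.b64encode(psk + crc).decode()}:")
EOF
}
format_interchange_psk 00112233445566778899aabbccddeeff 1
```

Running it against the same input as the log lets you compare the output against the `key=` line above; the `mktemp`/`chmod 0600` steps that follow just persist these strings to key files with owner-only permissions.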
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.spAOgbYNdq 00:18:48.483 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.742 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_start_init 00:18:49.000 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.MFsO3UWCPp 00:18:49.000 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.MFsO3UWCPp 00:18:49.001 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:49.259 [2024-12-10 22:49:56.750698] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.259 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:49.259 22:49:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:49.518 [2024-12-10 22:49:57.107610] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:49.518 [2024-12-10 22:49:57.107835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:49.518 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:49.776 malloc0 00:18:49.776 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:50.035 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.MFsO3UWCPp 00:18:50.035 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:50.294 22:49:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.MFsO3UWCPp 00:19:02.499 Initializing NVMe Controllers 00:19:02.499 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:02.499 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:02.499 Initialization complete. Launching workers. 
00:19:02.499 ======================================================== 00:19:02.499 Latency(us) 00:19:02.499 Device Information : IOPS MiB/s Average min max 00:19:02.499 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16172.77 63.17 3957.44 842.97 6407.15 00:19:02.499 ======================================================== 00:19:02.499 Total : 16172.77 63.17 3957.44 842.97 6407.15 00:19:02.499 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFsO3UWCPp 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MFsO3UWCPp 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2390022 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2390022 /var/tmp/bdevperf.sock 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2390022 ']' 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:02.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:02.499 [2024-12-10 22:50:08.056836] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:02.499 [2024-12-10 22:50:08.056887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2390022 ] 00:19:02.499 [2024-12-10 22:50:08.132213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.499 [2024-12-10 22:50:08.173775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFsO3UWCPp 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q 
nqn.2016-06.io.spdk:host1 --psk key0 00:19:02.499 [2024-12-10 22:50:08.633505] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:02.499 TLSTESTn1 00:19:02.499 22:50:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:02.499 Running I/O for 10 seconds... 00:19:03.433 5384.00 IOPS, 21.03 MiB/s [2024-12-10T21:50:12.097Z] 5441.00 IOPS, 21.25 MiB/s [2024-12-10T21:50:13.033Z] 5391.00 IOPS, 21.06 MiB/s [2024-12-10T21:50:13.970Z] 5434.25 IOPS, 21.23 MiB/s [2024-12-10T21:50:14.905Z] 5445.40 IOPS, 21.27 MiB/s [2024-12-10T21:50:15.841Z] 5454.67 IOPS, 21.31 MiB/s [2024-12-10T21:50:16.838Z] 5461.71 IOPS, 21.33 MiB/s [2024-12-10T21:50:18.252Z] 5470.38 IOPS, 21.37 MiB/s [2024-12-10T21:50:19.189Z] 5468.44 IOPS, 21.36 MiB/s [2024-12-10T21:50:19.189Z] 5471.30 IOPS, 21.37 MiB/s 00:19:11.460 Latency(us) 00:19:11.460 [2024-12-10T21:50:19.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.460 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:11.460 Verification LBA range: start 0x0 length 0x2000 00:19:11.460 TLSTESTn1 : 10.01 5477.52 21.40 0.00 0.00 23335.70 4559.03 24276.81 00:19:11.460 [2024-12-10T21:50:19.189Z] =================================================================================================================== 00:19:11.460 [2024-12-10T21:50:19.189Z] Total : 5477.52 21.40 0.00 0.00 23335.70 4559.03 24276.81 00:19:11.460 { 00:19:11.460 "results": [ 00:19:11.460 { 00:19:11.460 "job": "TLSTESTn1", 00:19:11.460 "core_mask": "0x4", 00:19:11.460 "workload": "verify", 00:19:11.460 "status": "finished", 00:19:11.460 "verify_range": { 00:19:11.460 "start": 0, 00:19:11.460 "length": 8192 00:19:11.460 }, 00:19:11.460 "queue_depth": 128, 00:19:11.460 "io_size": 4096, 00:19:11.460 "runtime": 10.012018, 
00:19:11.460 "iops": 5477.517119925274, 00:19:11.460 "mibps": 21.3965512497081, 00:19:11.460 "io_failed": 0, 00:19:11.460 "io_timeout": 0, 00:19:11.460 "avg_latency_us": 23335.702429933808, 00:19:11.460 "min_latency_us": 4559.026086956522, 00:19:11.460 "max_latency_us": 24276.81391304348 00:19:11.460 } 00:19:11.460 ], 00:19:11.460 "core_count": 1 00:19:11.460 } 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2390022 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2390022 ']' 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2390022 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2390022 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2390022' 00:19:11.460 killing process with pid 2390022 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2390022 00:19:11.460 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.460 00:19:11.460 Latency(us) 00:19:11.460 [2024-12-10T21:50:19.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.460 [2024-12-10T21:50:19.189Z] 
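The results JSON above is internally consistent: with 4096-byte I/Os (`io_size`), the reported `mibps` is simply `iops * 4096 / 2^20`. A quick sanity check of the figures:

```shell
# Verify the mibps field in the bdevperf results JSON from its iops field,
# using the 4096-byte IO size from the bdevperf command line.
iops=5477.517119925274   # "iops" from the results JSON above
mibps=$(awk -v i="$iops" 'BEGIN { printf "%.10f", i * 4096 / 1048576 }')
echo "$mibps MiB/s"       # agrees with the reported 21.3965512497081
```

Multiplying `iops` by the `runtime` field (~10.012 s) similarly gives roughly 54.8k completed I/Os over the 10-second verify run.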
=================================================================================================================== 00:19:11.460 [2024-12-10T21:50:19.189Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:11.460 22:50:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2390022 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spAOgbYNdq 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spAOgbYNdq 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.spAOgbYNdq 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.spAOgbYNdq 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
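The `NOT run_bdevperf ...` sequence that begins here, with its `local es=0`, `(( es > 128 ))`, and `(( !es == 0 ))` traces, is the autotest negative-test idiom: run a command that is *expected* to fail (here, attaching with the wrong PSK) and succeed only if it did fail. The sketch below is a hypothetical minimal reimplementation of that idiom, not the actual `autotest_common.sh` helper, which additionally distinguishes deaths by signal (`es > 128`):

```shell
# Minimal sketch of the NOT idiom: capture the wrapped command's exit
# status and invert it, so an expected failure counts as a pass.
NOT() {
  local es=0
  "$@" || es=$?
  (( es != 0 ))   # exit 0 only when the wrapped command failed
}
NOT false && echo "false failed, as expected"
NOT true || echo "true succeeded, so NOT reports failure"
```

In the log, `run_bdevperf` returns 1 after the `bdev_nvme_attach_controller` RPC errors out, so the surrounding `NOT` treats the test case as passed.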
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2391702 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2391702 /var/tmp/bdevperf.sock 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2391702 ']' 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:11.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.460 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.460 [2024-12-10 22:50:19.124086] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:11.460 [2024-12-10 22:50:19.124136] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391702 ] 00:19:11.719 [2024-12-10 22:50:19.195282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.719 [2024-12-10 22:50:19.236824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.719 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:11.719 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:11.719 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.spAOgbYNdq 00:19:11.977 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:11.977 [2024-12-10 22:50:19.684727] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.977 [2024-12-10 22:50:19.689803] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:11.977 [2024-12-10 22:50:19.690054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2125e20 (107): Transport endpoint is not connected 00:19:11.977 [2024-12-10 22:50:19.691047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2125e20 (9): Bad file descriptor 00:19:11.977 
[2024-12-10 22:50:19.692047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:11.977 [2024-12-10 22:50:19.692056] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:11.978 [2024-12-10 22:50:19.692063] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:11.978 [2024-12-10 22:50:19.692071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:11.978 request: 00:19:11.978 { 00:19:11.978 "name": "TLSTEST", 00:19:11.978 "trtype": "tcp", 00:19:11.978 "traddr": "10.0.0.2", 00:19:11.978 "adrfam": "ipv4", 00:19:11.978 "trsvcid": "4420", 00:19:11.978 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.978 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.978 "prchk_reftag": false, 00:19:11.978 "prchk_guard": false, 00:19:11.978 "hdgst": false, 00:19:11.978 "ddgst": false, 00:19:11.978 "psk": "key0", 00:19:11.978 "allow_unrecognized_csi": false, 00:19:11.978 "method": "bdev_nvme_attach_controller", 00:19:11.978 "req_id": 1 00:19:11.978 } 00:19:11.978 Got JSON-RPC error response 00:19:11.978 response: 00:19:11.978 { 00:19:11.978 "code": -5, 00:19:11.978 "message": "Input/output error" 00:19:11.978 } 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2391702 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2391702 ']' 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2391702 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391702 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391702' 00:19:12.237 killing process with pid 2391702 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2391702 00:19:12.237 Received shutdown signal, test time was about 10.000000 seconds 00:19:12.237 00:19:12.237 Latency(us) 00:19:12.237 [2024-12-10T21:50:19.966Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:12.237 [2024-12-10T21:50:19.966Z] =================================================================================================================== 00:19:12.237 [2024-12-10T21:50:19.966Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2391702 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFsO3UWCPp 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFsO3UWCPp 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.MFsO3UWCPp 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MFsO3UWCPp 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2391875 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2391875 
/var/tmp/bdevperf.sock 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2391875 ']' 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.237 22:50:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.496 [2024-12-10 22:50:19.965037] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:12.496 [2024-12-10 22:50:19.965089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2391875 ] 00:19:12.496 [2024-12-10 22:50:20.031462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.496 [2024-12-10 22:50:20.074493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.496 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.496 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:12.496 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFsO3UWCPp 00:19:12.754 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:19:13.013 [2024-12-10 22:50:20.559275] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.013 [2024-12-10 22:50:20.563998] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.013 [2024-12-10 22:50:20.564020] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:13.013 [2024-12-10 22:50:20.564059] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint 
is not connected 00:19:13.013 [2024-12-10 22:50:20.564726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125ee20 (107): Transport endpoint is not connected 00:19:13.013 [2024-12-10 22:50:20.565718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x125ee20 (9): Bad file descriptor 00:19:13.013 [2024-12-10 22:50:20.566720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:19:13.013 [2024-12-10 22:50:20.566729] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.013 [2024-12-10 22:50:20.566736] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:19:13.013 [2024-12-10 22:50:20.566744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:19:13.013 request: 00:19:13.013 { 00:19:13.013 "name": "TLSTEST", 00:19:13.013 "trtype": "tcp", 00:19:13.013 "traddr": "10.0.0.2", 00:19:13.013 "adrfam": "ipv4", 00:19:13.013 "trsvcid": "4420", 00:19:13.013 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.013 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:13.013 "prchk_reftag": false, 00:19:13.013 "prchk_guard": false, 00:19:13.013 "hdgst": false, 00:19:13.013 "ddgst": false, 00:19:13.013 "psk": "key0", 00:19:13.013 "allow_unrecognized_csi": false, 00:19:13.013 "method": "bdev_nvme_attach_controller", 00:19:13.013 "req_id": 1 00:19:13.013 } 00:19:13.013 Got JSON-RPC error response 00:19:13.013 response: 00:19:13.013 { 00:19:13.013 "code": -5, 00:19:13.013 "message": "Input/output error" 00:19:13.013 } 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2391875 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2391875 ']' 00:19:13.013 22:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2391875 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391875 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391875' 00:19:13.013 killing process with pid 2391875 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2391875 00:19:13.013 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.013 00:19:13.013 Latency(us) 00:19:13.013 [2024-12-10T21:50:20.742Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.013 [2024-12-10T21:50:20.742Z] =================================================================================================================== 00:19:13.013 [2024-12-10T21:50:20.742Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.013 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2391875 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:13.272 22:50:20 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFsO3UWCPp 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFsO3UWCPp 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.MFsO3UWCPp 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.MFsO3UWCPp 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2392109 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2392109 /var/tmp/bdevperf.sock 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2392109 ']' 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.272 22:50:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.272 [2024-12-10 22:50:20.846083] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:13.272 [2024-12-10 22:50:20.846133] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392109 ] 00:19:13.272 [2024-12-10 22:50:20.910862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.272 [2024-12-10 22:50:20.948388] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.531 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.531 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:13.531 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.MFsO3UWCPp 00:19:13.531 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:13.790 [2024-12-10 22:50:21.400299] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.790 [2024-12-10 22:50:21.404972] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:13.790 [2024-12-10 22:50:21.404993] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:13.790 [2024-12-10 22:50:21.405022] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint 
is not connected 00:19:13.790 [2024-12-10 22:50:21.405695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15efe20 (107): Transport endpoint is not connected 00:19:13.790 [2024-12-10 22:50:21.406689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15efe20 (9): Bad file descriptor 00:19:13.790 [2024-12-10 22:50:21.407690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:19:13.790 [2024-12-10 22:50:21.407700] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.790 [2024-12-10 22:50:21.407707] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:19:13.790 [2024-12-10 22:50:21.407714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:19:13.790 request: 00:19:13.790 { 00:19:13.790 "name": "TLSTEST", 00:19:13.790 "trtype": "tcp", 00:19:13.790 "traddr": "10.0.0.2", 00:19:13.790 "adrfam": "ipv4", 00:19:13.790 "trsvcid": "4420", 00:19:13.790 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:13.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.790 "prchk_reftag": false, 00:19:13.790 "prchk_guard": false, 00:19:13.790 "hdgst": false, 00:19:13.790 "ddgst": false, 00:19:13.790 "psk": "key0", 00:19:13.790 "allow_unrecognized_csi": false, 00:19:13.790 "method": "bdev_nvme_attach_controller", 00:19:13.790 "req_id": 1 00:19:13.790 } 00:19:13.790 Got JSON-RPC error response 00:19:13.790 response: 00:19:13.790 { 00:19:13.790 "code": -5, 00:19:13.790 "message": "Input/output error" 00:19:13.790 } 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2392109 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2392109 ']' 00:19:13.790 22:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2392109 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392109 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392109' 00:19:13.790 killing process with pid 2392109 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2392109 00:19:13.790 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.790 00:19:13.790 Latency(us) 00:19:13.790 [2024-12-10T21:50:21.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.790 [2024-12-10T21:50:21.519Z] =================================================================================================================== 00:19:13.790 [2024-12-10T21:50:21.519Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.790 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2392109 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.049 22:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.049 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2392129 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.050 22:50:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2392129 /var/tmp/bdevperf.sock 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2392129 ']' 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:14.050 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.050 [2024-12-10 22:50:21.692923] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:14.050 [2024-12-10 22:50:21.692971] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392129 ] 00:19:14.050 [2024-12-10 22:50:21.765451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.308 [2024-12-10 22:50:21.803183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.308 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:14.308 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:14.308 22:50:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:19:14.567 [2024-12-10 22:50:22.083205] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:19:14.567 [2024-12-10 22:50:22.083239] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:14.567 request: 00:19:14.567 { 00:19:14.567 "name": "key0", 00:19:14.567 "path": "", 00:19:14.567 "method": "keyring_file_add_key", 00:19:14.567 "req_id": 1 00:19:14.567 } 00:19:14.567 Got JSON-RPC error response 00:19:14.567 response: 00:19:14.567 { 00:19:14.567 "code": -1, 00:19:14.567 "message": "Operation not permitted" 00:19:14.567 } 00:19:14.567 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:14.567 [2024-12-10 22:50:22.291833] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:19:14.567 [2024-12-10 22:50:22.291865] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:14.826 request: 00:19:14.826 { 00:19:14.826 "name": "TLSTEST", 00:19:14.826 "trtype": "tcp", 00:19:14.826 "traddr": "10.0.0.2", 00:19:14.826 "adrfam": "ipv4", 00:19:14.826 "trsvcid": "4420", 00:19:14.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.826 "prchk_reftag": false, 00:19:14.826 "prchk_guard": false, 00:19:14.826 "hdgst": false, 00:19:14.826 "ddgst": false, 00:19:14.826 "psk": "key0", 00:19:14.826 "allow_unrecognized_csi": false, 00:19:14.826 "method": "bdev_nvme_attach_controller", 00:19:14.826 "req_id": 1 00:19:14.826 } 00:19:14.826 Got JSON-RPC error response 00:19:14.826 response: 00:19:14.826 { 00:19:14.826 "code": -126, 00:19:14.826 "message": "Required key not available" 00:19:14.826 } 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2392129 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2392129 ']' 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2392129 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392129 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392129' 00:19:14.826 killing process with pid 2392129 
00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2392129 00:19:14.826 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.826 00:19:14.826 Latency(us) 00:19:14.826 [2024-12-10T21:50:22.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.826 [2024-12-10T21:50:22.555Z] =================================================================================================================== 00:19:14.826 [2024-12-10T21:50:22.555Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2392129 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2387449 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2387449 ']' 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2387449 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:14.826 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2387449 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2387449' 00:19:15.086 killing process with pid 2387449 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2387449 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2387449 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.xGIDcQlaD5 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:15.086 22:50:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.xGIDcQlaD5 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2392374 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2392374 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2392374 ']' 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:15.086 22:50:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.345 [2024-12-10 22:50:22.842993] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:15.345 [2024-12-10 22:50:22.843041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:15.345 [2024-12-10 22:50:22.920339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.345 [2024-12-10 22:50:22.959622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:15.345 [2024-12-10 22:50:22.959660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:15.345 [2024-12-10 22:50:22.959668] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:15.345 [2024-12-10 22:50:22.959674] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:15.345 [2024-12-10 22:50:22.959698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:15.345 [2024-12-10 22:50:22.960272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.345 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:15.345 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:15.345 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:15.345 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:15.345 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.604 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.604 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.xGIDcQlaD5 00:19:15.604 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGIDcQlaD5 00:19:15.604 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:15.604 [2024-12-10 22:50:23.276556] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.604 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:15.863 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:16.121 [2024-12-10 22:50:23.689618] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.121 [2024-12-10 22:50:23.689815] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.121 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:16.380 malloc0 00:19:16.380 22:50:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:16.638 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:16.638 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGIDcQlaD5 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xGIDcQlaD5 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2392636 00:19:16.897 
22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2392636 /var/tmp/bdevperf.sock 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2392636 ']' 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:16.897 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.897 [2024-12-10 22:50:24.540027] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:16.897 [2024-12-10 22:50:24.540076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392636 ] 00:19:16.897 [2024-12-10 22:50:24.615169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.156 [2024-12-10 22:50:24.655821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.156 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:17.156 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:17.156 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:17.415 22:50:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:17.415 [2024-12-10 22:50:25.131652] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.673 TLSTESTn1 00:19:17.673 22:50:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:17.673 Running I/O for 10 seconds... 
00:19:19.985 5288.00 IOPS, 20.66 MiB/s [2024-12-10T21:50:28.649Z] 5370.00 IOPS, 20.98 MiB/s [2024-12-10T21:50:29.584Z] 5429.33 IOPS, 21.21 MiB/s [2024-12-10T21:50:30.519Z] 5431.00 IOPS, 21.21 MiB/s [2024-12-10T21:50:31.455Z] 5427.40 IOPS, 21.20 MiB/s [2024-12-10T21:50:32.390Z] 5393.83 IOPS, 21.07 MiB/s [2024-12-10T21:50:33.766Z] 5399.57 IOPS, 21.09 MiB/s [2024-12-10T21:50:34.702Z] 5407.50 IOPS, 21.12 MiB/s [2024-12-10T21:50:35.638Z] 5402.11 IOPS, 21.10 MiB/s [2024-12-10T21:50:35.638Z] 5406.40 IOPS, 21.12 MiB/s 00:19:27.909 Latency(us) 00:19:27.909 [2024-12-10T21:50:35.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.909 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:27.909 Verification LBA range: start 0x0 length 0x2000 00:19:27.909 TLSTESTn1 : 10.01 5412.13 21.14 0.00 0.00 23616.01 4986.43 41031.23 00:19:27.909 [2024-12-10T21:50:35.638Z] =================================================================================================================== 00:19:27.909 [2024-12-10T21:50:35.638Z] Total : 5412.13 21.14 0.00 0.00 23616.01 4986.43 41031.23 00:19:27.909 { 00:19:27.909 "results": [ 00:19:27.909 { 00:19:27.909 "job": "TLSTESTn1", 00:19:27.909 "core_mask": "0x4", 00:19:27.909 "workload": "verify", 00:19:27.909 "status": "finished", 00:19:27.909 "verify_range": { 00:19:27.909 "start": 0, 00:19:27.909 "length": 8192 00:19:27.909 }, 00:19:27.909 "queue_depth": 128, 00:19:27.909 "io_size": 4096, 00:19:27.909 "runtime": 10.013054, 00:19:27.909 "iops": 5412.134998972341, 00:19:27.909 "mibps": 21.141152339735708, 00:19:27.909 "io_failed": 0, 00:19:27.909 "io_timeout": 0, 00:19:27.909 "avg_latency_us": 23616.014295066812, 00:19:27.909 "min_latency_us": 4986.434782608696, 00:19:27.909 "max_latency_us": 41031.2347826087 00:19:27.909 } 00:19:27.909 ], 00:19:27.909 "core_count": 1 00:19:27.909 } 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2392636 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2392636 ']' 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2392636 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392636 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392636' 00:19:27.909 killing process with pid 2392636 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2392636 00:19:27.909 Received shutdown signal, test time was about 10.000000 seconds 00:19:27.909 00:19:27.909 Latency(us) 00:19:27.909 [2024-12-10T21:50:35.638Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:27.909 [2024-12-10T21:50:35.638Z] =================================================================================================================== 00:19:27.909 [2024-12-10T21:50:35.638Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2392636 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.xGIDcQlaD5 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGIDcQlaD5 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGIDcQlaD5 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xGIDcQlaD5 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.xGIDcQlaD5 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2394470 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2394470 /var/tmp/bdevperf.sock 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2394470 ']' 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:27.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.909 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.168 [2024-12-10 22:50:35.644630] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:28.168 [2024-12-10 22:50:35.644681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2394470 ] 00:19:28.168 [2024-12-10 22:50:35.714591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.168 [2024-12-10 22:50:35.751645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.168 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.168 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.168 22:50:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:28.426 [2024-12-10 22:50:36.014658] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xGIDcQlaD5': 0100666 00:19:28.426 [2024-12-10 22:50:36.014690] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:28.426 request: 00:19:28.426 { 00:19:28.426 "name": "key0", 00:19:28.426 "path": "/tmp/tmp.xGIDcQlaD5", 00:19:28.426 "method": "keyring_file_add_key", 00:19:28.426 "req_id": 1 00:19:28.426 } 00:19:28.426 Got JSON-RPC error response 00:19:28.426 response: 00:19:28.426 { 00:19:28.426 "code": -1, 00:19:28.426 "message": "Operation not permitted" 00:19:28.426 } 00:19:28.426 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:28.683 [2024-12-10 22:50:36.227292] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:28.683 [2024-12-10 22:50:36.227321] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:28.683 request: 00:19:28.683 { 00:19:28.683 "name": "TLSTEST", 00:19:28.683 "trtype": "tcp", 00:19:28.683 "traddr": "10.0.0.2", 00:19:28.683 "adrfam": "ipv4", 00:19:28.683 "trsvcid": "4420", 00:19:28.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:28.683 "prchk_reftag": false, 00:19:28.683 "prchk_guard": false, 00:19:28.683 "hdgst": false, 00:19:28.683 "ddgst": false, 00:19:28.683 "psk": "key0", 00:19:28.683 "allow_unrecognized_csi": false, 00:19:28.683 "method": "bdev_nvme_attach_controller", 00:19:28.683 "req_id": 1 00:19:28.683 } 00:19:28.683 Got JSON-RPC error response 00:19:28.683 response: 00:19:28.683 { 00:19:28.683 "code": -126, 00:19:28.683 "message": "Required key not available" 00:19:28.683 } 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2394470 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2394470 ']' 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2394470 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2394470 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2394470' 00:19:28.683 killing process with pid 2394470 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2394470 00:19:28.683 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.683 00:19:28.683 Latency(us) 00:19:28.683 [2024-12-10T21:50:36.412Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.683 [2024-12-10T21:50:36.412Z] =================================================================================================================== 00:19:28.683 [2024-12-10T21:50:36.412Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:28.683 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2394470 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2392374 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2392374 ']' 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2392374 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2392374 00:19:28.942 
22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2392374' 00:19:28.942 killing process with pid 2392374 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2392374 00:19:28.942 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2392374 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2394707 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2394707 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2394707 ']' 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:19:29.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:29.200 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.200 [2024-12-10 22:50:36.739537] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:29.200 [2024-12-10 22:50:36.739585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.200 [2024-12-10 22:50:36.818245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.200 [2024-12-10 22:50:36.853880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.200 [2024-12-10 22:50:36.853917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.200 [2024-12-10 22:50:36.853925] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.200 [2024-12-10 22:50:36.853931] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.200 [2024-12-10 22:50:36.853936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:29.200 [2024-12-10 22:50:36.854503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.xGIDcQlaD5 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.xGIDcQlaD5 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.xGIDcQlaD5 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGIDcQlaD5 00:19:29.459 22:50:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:29.459 [2024-12-10 22:50:37.162667] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:29.717 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:29.717 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:29.976 [2024-12-10 22:50:37.579740] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:29.976 [2024-12-10 22:50:37.579973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.976 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:30.235 malloc0 00:19:30.235 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:30.493 22:50:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:30.494 [2024-12-10 22:50:38.145354] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.xGIDcQlaD5': 0100666 00:19:30.494 [2024-12-10 22:50:38.145389] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:30.494 request: 00:19:30.494 { 00:19:30.494 "name": "key0", 00:19:30.494 "path": "/tmp/tmp.xGIDcQlaD5", 00:19:30.494 "method": "keyring_file_add_key", 00:19:30.494 
"req_id": 1 00:19:30.494 } 00:19:30.494 Got JSON-RPC error response 00:19:30.494 response: 00:19:30.494 { 00:19:30.494 "code": -1, 00:19:30.494 "message": "Operation not permitted" 00:19:30.494 } 00:19:30.494 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:30.753 [2024-12-10 22:50:38.321826] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:30.753 [2024-12-10 22:50:38.321865] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:30.753 request: 00:19:30.753 { 00:19:30.753 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.753 "host": "nqn.2016-06.io.spdk:host1", 00:19:30.753 "psk": "key0", 00:19:30.753 "method": "nvmf_subsystem_add_host", 00:19:30.753 "req_id": 1 00:19:30.753 } 00:19:30.753 Got JSON-RPC error response 00:19:30.753 response: 00:19:30.753 { 00:19:30.753 "code": -32603, 00:19:30.753 "message": "Internal error" 00:19:30.753 } 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2394707 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2394707 ']' 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2394707 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.753 22:50:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2394707 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2394707' 00:19:30.753 killing process with pid 2394707 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2394707 00:19:30.753 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2394707 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.xGIDcQlaD5 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2394971 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2394971 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2394971 ']' 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.018 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.018 [2024-12-10 22:50:38.610892] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:31.018 [2024-12-10 22:50:38.610939] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.018 [2024-12-10 22:50:38.684533] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.018 [2024-12-10 22:50:38.724801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.018 [2024-12-10 22:50:38.724838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.018 [2024-12-10 22:50:38.724845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.018 [2024-12-10 22:50:38.724851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.018 [2024-12-10 22:50:38.724856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:31.018 [2024-12-10 22:50:38.725412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.xGIDcQlaD5 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGIDcQlaD5 00:19:31.277 22:50:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:31.536 [2024-12-10 22:50:39.034541] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:31.536 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.536 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:31.795 [2024-12-10 22:50:39.411516] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.795 [2024-12-10 22:50:39.411726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.795 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:32.053 malloc0 00:19:32.053 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:32.312 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:32.312 22:50:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2395261 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2395261 /var/tmp/bdevperf.sock 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2395261 ']' 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.570 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.570 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.570 [2024-12-10 22:50:40.241046] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:32.571 [2024-12-10 22:50:40.241095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395261 ] 00:19:32.829 [2024-12-10 22:50:40.315630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.829 [2024-12-10 22:50:40.357363] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:32.829 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.829 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.829 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:33.088 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:33.347 [2024-12-10 22:50:40.819117] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.347 TLSTESTn1 00:19:33.347 22:50:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py save_config 00:19:33.606 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:19:33.606 "subsystems": [ 00:19:33.606 { 00:19:33.606 "subsystem": "keyring", 00:19:33.606 "config": [ 00:19:33.606 { 00:19:33.606 "method": "keyring_file_add_key", 00:19:33.606 "params": { 00:19:33.606 "name": "key0", 00:19:33.606 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:33.606 } 00:19:33.606 } 00:19:33.606 ] 00:19:33.606 }, 00:19:33.606 { 00:19:33.607 "subsystem": "iobuf", 00:19:33.607 "config": [ 00:19:33.607 { 00:19:33.607 "method": "iobuf_set_options", 00:19:33.607 "params": { 00:19:33.607 "small_pool_count": 8192, 00:19:33.607 "large_pool_count": 1024, 00:19:33.607 "small_bufsize": 8192, 00:19:33.607 "large_bufsize": 135168, 00:19:33.607 "enable_numa": false 00:19:33.607 } 00:19:33.607 } 00:19:33.607 ] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": "sock", 00:19:33.607 "config": [ 00:19:33.607 { 00:19:33.607 "method": "sock_set_default_impl", 00:19:33.607 "params": { 00:19:33.607 "impl_name": "posix" 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "sock_impl_set_options", 00:19:33.607 "params": { 00:19:33.607 "impl_name": "ssl", 00:19:33.607 "recv_buf_size": 4096, 00:19:33.607 "send_buf_size": 4096, 00:19:33.607 "enable_recv_pipe": true, 00:19:33.607 "enable_quickack": false, 00:19:33.607 "enable_placement_id": 0, 00:19:33.607 "enable_zerocopy_send_server": true, 00:19:33.607 "enable_zerocopy_send_client": false, 00:19:33.607 "zerocopy_threshold": 0, 00:19:33.607 "tls_version": 0, 00:19:33.607 "enable_ktls": false 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "sock_impl_set_options", 00:19:33.607 "params": { 00:19:33.607 "impl_name": "posix", 00:19:33.607 "recv_buf_size": 2097152, 00:19:33.607 "send_buf_size": 2097152, 00:19:33.607 "enable_recv_pipe": true, 00:19:33.607 "enable_quickack": false, 00:19:33.607 
"enable_placement_id": 0, 00:19:33.607 "enable_zerocopy_send_server": true, 00:19:33.607 "enable_zerocopy_send_client": false, 00:19:33.607 "zerocopy_threshold": 0, 00:19:33.607 "tls_version": 0, 00:19:33.607 "enable_ktls": false 00:19:33.607 } 00:19:33.607 } 00:19:33.607 ] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": "vmd", 00:19:33.607 "config": [] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": "accel", 00:19:33.607 "config": [ 00:19:33.607 { 00:19:33.607 "method": "accel_set_options", 00:19:33.607 "params": { 00:19:33.607 "small_cache_size": 128, 00:19:33.607 "large_cache_size": 16, 00:19:33.607 "task_count": 2048, 00:19:33.607 "sequence_count": 2048, 00:19:33.607 "buf_count": 2048 00:19:33.607 } 00:19:33.607 } 00:19:33.607 ] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": "bdev", 00:19:33.607 "config": [ 00:19:33.607 { 00:19:33.607 "method": "bdev_set_options", 00:19:33.607 "params": { 00:19:33.607 "bdev_io_pool_size": 65535, 00:19:33.607 "bdev_io_cache_size": 256, 00:19:33.607 "bdev_auto_examine": true, 00:19:33.607 "iobuf_small_cache_size": 128, 00:19:33.607 "iobuf_large_cache_size": 16 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "bdev_raid_set_options", 00:19:33.607 "params": { 00:19:33.607 "process_window_size_kb": 1024, 00:19:33.607 "process_max_bandwidth_mb_sec": 0 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "bdev_iscsi_set_options", 00:19:33.607 "params": { 00:19:33.607 "timeout_sec": 30 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "bdev_nvme_set_options", 00:19:33.607 "params": { 00:19:33.607 "action_on_timeout": "none", 00:19:33.607 "timeout_us": 0, 00:19:33.607 "timeout_admin_us": 0, 00:19:33.607 "keep_alive_timeout_ms": 10000, 00:19:33.607 "arbitration_burst": 0, 00:19:33.607 "low_priority_weight": 0, 00:19:33.607 "medium_priority_weight": 0, 00:19:33.607 "high_priority_weight": 0, 00:19:33.607 "nvme_adminq_poll_period_us": 10000, 00:19:33.607 
"nvme_ioq_poll_period_us": 0, 00:19:33.607 "io_queue_requests": 0, 00:19:33.607 "delay_cmd_submit": true, 00:19:33.607 "transport_retry_count": 4, 00:19:33.607 "bdev_retry_count": 3, 00:19:33.607 "transport_ack_timeout": 0, 00:19:33.607 "ctrlr_loss_timeout_sec": 0, 00:19:33.607 "reconnect_delay_sec": 0, 00:19:33.607 "fast_io_fail_timeout_sec": 0, 00:19:33.607 "disable_auto_failback": false, 00:19:33.607 "generate_uuids": false, 00:19:33.607 "transport_tos": 0, 00:19:33.607 "nvme_error_stat": false, 00:19:33.607 "rdma_srq_size": 0, 00:19:33.607 "io_path_stat": false, 00:19:33.607 "allow_accel_sequence": false, 00:19:33.607 "rdma_max_cq_size": 0, 00:19:33.607 "rdma_cm_event_timeout_ms": 0, 00:19:33.607 "dhchap_digests": [ 00:19:33.607 "sha256", 00:19:33.607 "sha384", 00:19:33.607 "sha512" 00:19:33.607 ], 00:19:33.607 "dhchap_dhgroups": [ 00:19:33.607 "null", 00:19:33.607 "ffdhe2048", 00:19:33.607 "ffdhe3072", 00:19:33.607 "ffdhe4096", 00:19:33.607 "ffdhe6144", 00:19:33.607 "ffdhe8192" 00:19:33.607 ], 00:19:33.607 "rdma_umr_per_io": false 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "bdev_nvme_set_hotplug", 00:19:33.607 "params": { 00:19:33.607 "period_us": 100000, 00:19:33.607 "enable": false 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "bdev_malloc_create", 00:19:33.607 "params": { 00:19:33.607 "name": "malloc0", 00:19:33.607 "num_blocks": 8192, 00:19:33.607 "block_size": 4096, 00:19:33.607 "physical_block_size": 4096, 00:19:33.607 "uuid": "1441e7d4-ae7b-4b1e-8c37-561b5e448397", 00:19:33.607 "optimal_io_boundary": 0, 00:19:33.607 "md_size": 0, 00:19:33.607 "dif_type": 0, 00:19:33.607 "dif_is_head_of_md": false, 00:19:33.607 "dif_pi_format": 0 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "bdev_wait_for_examine" 00:19:33.607 } 00:19:33.607 ] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": "nbd", 00:19:33.607 "config": [] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": 
"scheduler", 00:19:33.607 "config": [ 00:19:33.607 { 00:19:33.607 "method": "framework_set_scheduler", 00:19:33.607 "params": { 00:19:33.607 "name": "static" 00:19:33.607 } 00:19:33.607 } 00:19:33.607 ] 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "subsystem": "nvmf", 00:19:33.607 "config": [ 00:19:33.607 { 00:19:33.607 "method": "nvmf_set_config", 00:19:33.607 "params": { 00:19:33.607 "discovery_filter": "match_any", 00:19:33.607 "admin_cmd_passthru": { 00:19:33.607 "identify_ctrlr": false 00:19:33.607 }, 00:19:33.607 "dhchap_digests": [ 00:19:33.607 "sha256", 00:19:33.607 "sha384", 00:19:33.607 "sha512" 00:19:33.607 ], 00:19:33.607 "dhchap_dhgroups": [ 00:19:33.607 "null", 00:19:33.607 "ffdhe2048", 00:19:33.607 "ffdhe3072", 00:19:33.607 "ffdhe4096", 00:19:33.607 "ffdhe6144", 00:19:33.607 "ffdhe8192" 00:19:33.607 ] 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "nvmf_set_max_subsystems", 00:19:33.607 "params": { 00:19:33.607 "max_subsystems": 1024 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "nvmf_set_crdt", 00:19:33.607 "params": { 00:19:33.607 "crdt1": 0, 00:19:33.607 "crdt2": 0, 00:19:33.607 "crdt3": 0 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "nvmf_create_transport", 00:19:33.607 "params": { 00:19:33.607 "trtype": "TCP", 00:19:33.607 "max_queue_depth": 128, 00:19:33.607 "max_io_qpairs_per_ctrlr": 127, 00:19:33.607 "in_capsule_data_size": 4096, 00:19:33.607 "max_io_size": 131072, 00:19:33.607 "io_unit_size": 131072, 00:19:33.607 "max_aq_depth": 128, 00:19:33.607 "num_shared_buffers": 511, 00:19:33.607 "buf_cache_size": 4294967295, 00:19:33.607 "dif_insert_or_strip": false, 00:19:33.607 "zcopy": false, 00:19:33.607 "c2h_success": false, 00:19:33.607 "sock_priority": 0, 00:19:33.607 "abort_timeout_sec": 1, 00:19:33.607 "ack_timeout": 0, 00:19:33.607 "data_wr_pool_size": 0 00:19:33.607 } 00:19:33.607 }, 00:19:33.607 { 00:19:33.607 "method": "nvmf_create_subsystem", 00:19:33.607 "params": { 
00:19:33.607 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.607 "allow_any_host": false, 00:19:33.608 "serial_number": "SPDK00000000000001", 00:19:33.608 "model_number": "SPDK bdev Controller", 00:19:33.608 "max_namespaces": 10, 00:19:33.608 "min_cntlid": 1, 00:19:33.608 "max_cntlid": 65519, 00:19:33.608 "ana_reporting": false 00:19:33.608 } 00:19:33.608 }, 00:19:33.608 { 00:19:33.608 "method": "nvmf_subsystem_add_host", 00:19:33.608 "params": { 00:19:33.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.608 "host": "nqn.2016-06.io.spdk:host1", 00:19:33.608 "psk": "key0" 00:19:33.608 } 00:19:33.608 }, 00:19:33.608 { 00:19:33.608 "method": "nvmf_subsystem_add_ns", 00:19:33.608 "params": { 00:19:33.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.608 "namespace": { 00:19:33.608 "nsid": 1, 00:19:33.608 "bdev_name": "malloc0", 00:19:33.608 "nguid": "1441E7D4AE7B4B1E8C37561B5E448397", 00:19:33.608 "uuid": "1441e7d4-ae7b-4b1e-8c37-561b5e448397", 00:19:33.608 "no_auto_visible": false 00:19:33.608 } 00:19:33.608 } 00:19:33.608 }, 00:19:33.608 { 00:19:33.608 "method": "nvmf_subsystem_add_listener", 00:19:33.608 "params": { 00:19:33.608 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.608 "listen_address": { 00:19:33.608 "trtype": "TCP", 00:19:33.608 "adrfam": "IPv4", 00:19:33.608 "traddr": "10.0.0.2", 00:19:33.608 "trsvcid": "4420" 00:19:33.608 }, 00:19:33.608 "secure_channel": true 00:19:33.608 } 00:19:33.608 } 00:19:33.608 ] 00:19:33.608 } 00:19:33.608 ] 00:19:33.608 }' 00:19:33.608 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:33.867 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:19:33.867 "subsystems": [ 00:19:33.867 { 00:19:33.867 "subsystem": "keyring", 00:19:33.867 "config": [ 00:19:33.867 { 00:19:33.867 "method": "keyring_file_add_key", 00:19:33.867 "params": { 00:19:33.867 "name": "key0", 
00:19:33.867 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:33.867 } 00:19:33.867 } 00:19:33.867 ] 00:19:33.867 }, 00:19:33.867 { 00:19:33.867 "subsystem": "iobuf", 00:19:33.867 "config": [ 00:19:33.867 { 00:19:33.867 "method": "iobuf_set_options", 00:19:33.867 "params": { 00:19:33.867 "small_pool_count": 8192, 00:19:33.867 "large_pool_count": 1024, 00:19:33.867 "small_bufsize": 8192, 00:19:33.867 "large_bufsize": 135168, 00:19:33.868 "enable_numa": false 00:19:33.868 } 00:19:33.868 } 00:19:33.868 ] 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "subsystem": "sock", 00:19:33.868 "config": [ 00:19:33.868 { 00:19:33.868 "method": "sock_set_default_impl", 00:19:33.868 "params": { 00:19:33.868 "impl_name": "posix" 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "sock_impl_set_options", 00:19:33.868 "params": { 00:19:33.868 "impl_name": "ssl", 00:19:33.868 "recv_buf_size": 4096, 00:19:33.868 "send_buf_size": 4096, 00:19:33.868 "enable_recv_pipe": true, 00:19:33.868 "enable_quickack": false, 00:19:33.868 "enable_placement_id": 0, 00:19:33.868 "enable_zerocopy_send_server": true, 00:19:33.868 "enable_zerocopy_send_client": false, 00:19:33.868 "zerocopy_threshold": 0, 00:19:33.868 "tls_version": 0, 00:19:33.868 "enable_ktls": false 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "sock_impl_set_options", 00:19:33.868 "params": { 00:19:33.868 "impl_name": "posix", 00:19:33.868 "recv_buf_size": 2097152, 00:19:33.868 "send_buf_size": 2097152, 00:19:33.868 "enable_recv_pipe": true, 00:19:33.868 "enable_quickack": false, 00:19:33.868 "enable_placement_id": 0, 00:19:33.868 "enable_zerocopy_send_server": true, 00:19:33.868 "enable_zerocopy_send_client": false, 00:19:33.868 "zerocopy_threshold": 0, 00:19:33.868 "tls_version": 0, 00:19:33.868 "enable_ktls": false 00:19:33.868 } 00:19:33.868 } 00:19:33.868 ] 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "subsystem": "vmd", 00:19:33.868 "config": [] 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "subsystem": 
"accel", 00:19:33.868 "config": [ 00:19:33.868 { 00:19:33.868 "method": "accel_set_options", 00:19:33.868 "params": { 00:19:33.868 "small_cache_size": 128, 00:19:33.868 "large_cache_size": 16, 00:19:33.868 "task_count": 2048, 00:19:33.868 "sequence_count": 2048, 00:19:33.868 "buf_count": 2048 00:19:33.868 } 00:19:33.868 } 00:19:33.868 ] 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "subsystem": "bdev", 00:19:33.868 "config": [ 00:19:33.868 { 00:19:33.868 "method": "bdev_set_options", 00:19:33.868 "params": { 00:19:33.868 "bdev_io_pool_size": 65535, 00:19:33.868 "bdev_io_cache_size": 256, 00:19:33.868 "bdev_auto_examine": true, 00:19:33.868 "iobuf_small_cache_size": 128, 00:19:33.868 "iobuf_large_cache_size": 16 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "bdev_raid_set_options", 00:19:33.868 "params": { 00:19:33.868 "process_window_size_kb": 1024, 00:19:33.868 "process_max_bandwidth_mb_sec": 0 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "bdev_iscsi_set_options", 00:19:33.868 "params": { 00:19:33.868 "timeout_sec": 30 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "bdev_nvme_set_options", 00:19:33.868 "params": { 00:19:33.868 "action_on_timeout": "none", 00:19:33.868 "timeout_us": 0, 00:19:33.868 "timeout_admin_us": 0, 00:19:33.868 "keep_alive_timeout_ms": 10000, 00:19:33.868 "arbitration_burst": 0, 00:19:33.868 "low_priority_weight": 0, 00:19:33.868 "medium_priority_weight": 0, 00:19:33.868 "high_priority_weight": 0, 00:19:33.868 "nvme_adminq_poll_period_us": 10000, 00:19:33.868 "nvme_ioq_poll_period_us": 0, 00:19:33.868 "io_queue_requests": 512, 00:19:33.868 "delay_cmd_submit": true, 00:19:33.868 "transport_retry_count": 4, 00:19:33.868 "bdev_retry_count": 3, 00:19:33.868 "transport_ack_timeout": 0, 00:19:33.868 "ctrlr_loss_timeout_sec": 0, 00:19:33.868 "reconnect_delay_sec": 0, 00:19:33.868 "fast_io_fail_timeout_sec": 0, 00:19:33.868 "disable_auto_failback": false, 00:19:33.868 
"generate_uuids": false, 00:19:33.868 "transport_tos": 0, 00:19:33.868 "nvme_error_stat": false, 00:19:33.868 "rdma_srq_size": 0, 00:19:33.868 "io_path_stat": false, 00:19:33.868 "allow_accel_sequence": false, 00:19:33.868 "rdma_max_cq_size": 0, 00:19:33.868 "rdma_cm_event_timeout_ms": 0, 00:19:33.868 "dhchap_digests": [ 00:19:33.868 "sha256", 00:19:33.868 "sha384", 00:19:33.868 "sha512" 00:19:33.868 ], 00:19:33.868 "dhchap_dhgroups": [ 00:19:33.868 "null", 00:19:33.868 "ffdhe2048", 00:19:33.868 "ffdhe3072", 00:19:33.868 "ffdhe4096", 00:19:33.868 "ffdhe6144", 00:19:33.868 "ffdhe8192" 00:19:33.868 ], 00:19:33.868 "rdma_umr_per_io": false 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "bdev_nvme_attach_controller", 00:19:33.868 "params": { 00:19:33.868 "name": "TLSTEST", 00:19:33.868 "trtype": "TCP", 00:19:33.868 "adrfam": "IPv4", 00:19:33.868 "traddr": "10.0.0.2", 00:19:33.868 "trsvcid": "4420", 00:19:33.868 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.868 "prchk_reftag": false, 00:19:33.868 "prchk_guard": false, 00:19:33.868 "ctrlr_loss_timeout_sec": 0, 00:19:33.868 "reconnect_delay_sec": 0, 00:19:33.868 "fast_io_fail_timeout_sec": 0, 00:19:33.868 "psk": "key0", 00:19:33.868 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.868 "hdgst": false, 00:19:33.868 "ddgst": false, 00:19:33.868 "multipath": "multipath" 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "bdev_nvme_set_hotplug", 00:19:33.868 "params": { 00:19:33.868 "period_us": 100000, 00:19:33.868 "enable": false 00:19:33.868 } 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "method": "bdev_wait_for_examine" 00:19:33.868 } 00:19:33.868 ] 00:19:33.868 }, 00:19:33.868 { 00:19:33.868 "subsystem": "nbd", 00:19:33.868 "config": [] 00:19:33.868 } 00:19:33.868 ] 00:19:33.868 }' 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2395261 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
'[' -z 2395261 ']' 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2395261 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395261 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2395261' 00:19:33.868 killing process with pid 2395261 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2395261 00:19:33.868 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.868 00:19:33.868 Latency(us) 00:19:33.868 [2024-12-10T21:50:41.597Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.868 [2024-12-10T21:50:41.597Z] =================================================================================================================== 00:19:33.868 [2024-12-10T21:50:41.597Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.868 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2395261 00:19:34.127 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2394971 00:19:34.127 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2394971 ']' 00:19:34.127 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2394971 00:19:34.127 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # uname 00:19:34.127 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.127 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2394971 00:19:34.128 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.128 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.128 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2394971' 00:19:34.128 killing process with pid 2394971 00:19:34.128 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2394971 00:19:34.128 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2394971 00:19:34.387 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:34.387 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:34.387 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:34.387 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.387 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:19:34.387 "subsystems": [ 00:19:34.387 { 00:19:34.387 "subsystem": "keyring", 00:19:34.387 "config": [ 00:19:34.387 { 00:19:34.387 "method": "keyring_file_add_key", 00:19:34.387 "params": { 00:19:34.387 "name": "key0", 00:19:34.387 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:34.387 } 00:19:34.387 } 00:19:34.387 ] 00:19:34.387 }, 00:19:34.387 { 00:19:34.387 "subsystem": "iobuf", 00:19:34.387 "config": [ 00:19:34.387 { 00:19:34.387 "method": "iobuf_set_options", 00:19:34.387 "params": { 00:19:34.387 "small_pool_count": 8192, 
00:19:34.387 "large_pool_count": 1024, 00:19:34.387 "small_bufsize": 8192, 00:19:34.387 "large_bufsize": 135168, 00:19:34.387 "enable_numa": false 00:19:34.387 } 00:19:34.387 } 00:19:34.387 ] 00:19:34.387 }, 00:19:34.387 { 00:19:34.387 "subsystem": "sock", 00:19:34.387 "config": [ 00:19:34.387 { 00:19:34.387 "method": "sock_set_default_impl", 00:19:34.387 "params": { 00:19:34.387 "impl_name": "posix" 00:19:34.387 } 00:19:34.387 }, 00:19:34.387 { 00:19:34.387 "method": "sock_impl_set_options", 00:19:34.387 "params": { 00:19:34.387 "impl_name": "ssl", 00:19:34.387 "recv_buf_size": 4096, 00:19:34.387 "send_buf_size": 4096, 00:19:34.387 "enable_recv_pipe": true, 00:19:34.387 "enable_quickack": false, 00:19:34.388 "enable_placement_id": 0, 00:19:34.388 "enable_zerocopy_send_server": true, 00:19:34.388 "enable_zerocopy_send_client": false, 00:19:34.388 "zerocopy_threshold": 0, 00:19:34.388 "tls_version": 0, 00:19:34.388 "enable_ktls": false 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "sock_impl_set_options", 00:19:34.388 "params": { 00:19:34.388 "impl_name": "posix", 00:19:34.388 "recv_buf_size": 2097152, 00:19:34.388 "send_buf_size": 2097152, 00:19:34.388 "enable_recv_pipe": true, 00:19:34.388 "enable_quickack": false, 00:19:34.388 "enable_placement_id": 0, 00:19:34.388 "enable_zerocopy_send_server": true, 00:19:34.388 "enable_zerocopy_send_client": false, 00:19:34.388 "zerocopy_threshold": 0, 00:19:34.388 "tls_version": 0, 00:19:34.388 "enable_ktls": false 00:19:34.388 } 00:19:34.388 } 00:19:34.388 ] 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "subsystem": "vmd", 00:19:34.388 "config": [] 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "subsystem": "accel", 00:19:34.388 "config": [ 00:19:34.388 { 00:19:34.388 "method": "accel_set_options", 00:19:34.388 "params": { 00:19:34.388 "small_cache_size": 128, 00:19:34.388 "large_cache_size": 16, 00:19:34.388 "task_count": 2048, 00:19:34.388 "sequence_count": 2048, 00:19:34.388 "buf_count": 2048 
00:19:34.388 } 00:19:34.388 } 00:19:34.388 ] 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "subsystem": "bdev", 00:19:34.388 "config": [ 00:19:34.388 { 00:19:34.388 "method": "bdev_set_options", 00:19:34.388 "params": { 00:19:34.388 "bdev_io_pool_size": 65535, 00:19:34.388 "bdev_io_cache_size": 256, 00:19:34.388 "bdev_auto_examine": true, 00:19:34.388 "iobuf_small_cache_size": 128, 00:19:34.388 "iobuf_large_cache_size": 16 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "bdev_raid_set_options", 00:19:34.388 "params": { 00:19:34.388 "process_window_size_kb": 1024, 00:19:34.388 "process_max_bandwidth_mb_sec": 0 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "bdev_iscsi_set_options", 00:19:34.388 "params": { 00:19:34.388 "timeout_sec": 30 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "bdev_nvme_set_options", 00:19:34.388 "params": { 00:19:34.388 "action_on_timeout": "none", 00:19:34.388 "timeout_us": 0, 00:19:34.388 "timeout_admin_us": 0, 00:19:34.388 "keep_alive_timeout_ms": 10000, 00:19:34.388 "arbitration_burst": 0, 00:19:34.388 "low_priority_weight": 0, 00:19:34.388 "medium_priority_weight": 0, 00:19:34.388 "high_priority_weight": 0, 00:19:34.388 "nvme_adminq_poll_period_us": 10000, 00:19:34.388 "nvme_ioq_poll_period_us": 0, 00:19:34.388 "io_queue_requests": 0, 00:19:34.388 "delay_cmd_submit": true, 00:19:34.388 "transport_retry_count": 4, 00:19:34.388 "bdev_retry_count": 3, 00:19:34.388 "transport_ack_timeout": 0, 00:19:34.388 "ctrlr_loss_timeout_sec": 0, 00:19:34.388 "reconnect_delay_sec": 0, 00:19:34.388 "fast_io_fail_timeout_sec": 0, 00:19:34.388 "disable_auto_failback": false, 00:19:34.388 "generate_uuids": false, 00:19:34.388 "transport_tos": 0, 00:19:34.388 "nvme_error_stat": false, 00:19:34.388 "rdma_srq_size": 0, 00:19:34.388 "io_path_stat": false, 00:19:34.388 "allow_accel_sequence": false, 00:19:34.388 "rdma_max_cq_size": 0, 00:19:34.388 "rdma_cm_event_timeout_ms": 0, 00:19:34.388 
"dhchap_digests": [ 00:19:34.388 "sha256", 00:19:34.388 "sha384", 00:19:34.388 "sha512" 00:19:34.388 ], 00:19:34.388 "dhchap_dhgroups": [ 00:19:34.388 "null", 00:19:34.388 "ffdhe2048", 00:19:34.388 "ffdhe3072", 00:19:34.388 "ffdhe4096", 00:19:34.388 "ffdhe6144", 00:19:34.388 "ffdhe8192" 00:19:34.388 ], 00:19:34.388 "rdma_umr_per_io": false 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "bdev_nvme_set_hotplug", 00:19:34.388 "params": { 00:19:34.388 "period_us": 100000, 00:19:34.388 "enable": false 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "bdev_malloc_create", 00:19:34.388 "params": { 00:19:34.388 "name": "malloc0", 00:19:34.388 "num_blocks": 8192, 00:19:34.388 "block_size": 4096, 00:19:34.388 "physical_block_size": 4096, 00:19:34.388 "uuid": "1441e7d4-ae7b-4b1e-8c37-561b5e448397", 00:19:34.388 "optimal_io_boundary": 0, 00:19:34.388 "md_size": 0, 00:19:34.388 "dif_type": 0, 00:19:34.388 "dif_is_head_of_md": false, 00:19:34.388 "dif_pi_format": 0 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "bdev_wait_for_examine" 00:19:34.388 } 00:19:34.388 ] 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "subsystem": "nbd", 00:19:34.388 "config": [] 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "subsystem": "scheduler", 00:19:34.388 "config": [ 00:19:34.388 { 00:19:34.388 "method": "framework_set_scheduler", 00:19:34.388 "params": { 00:19:34.388 "name": "static" 00:19:34.388 } 00:19:34.388 } 00:19:34.388 ] 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "subsystem": "nvmf", 00:19:34.388 "config": [ 00:19:34.388 { 00:19:34.388 "method": "nvmf_set_config", 00:19:34.388 "params": { 00:19:34.388 "discovery_filter": "match_any", 00:19:34.388 "admin_cmd_passthru": { 00:19:34.388 "identify_ctrlr": false 00:19:34.388 }, 00:19:34.388 "dhchap_digests": [ 00:19:34.388 "sha256", 00:19:34.388 "sha384", 00:19:34.388 "sha512" 00:19:34.388 ], 00:19:34.388 "dhchap_dhgroups": [ 00:19:34.388 "null", 00:19:34.388 "ffdhe2048", 00:19:34.388 
"ffdhe3072", 00:19:34.388 "ffdhe4096", 00:19:34.388 "ffdhe6144", 00:19:34.388 "ffdhe8192" 00:19:34.388 ] 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_set_max_subsystems", 00:19:34.388 "params": { 00:19:34.388 "max_subsystems": 1024 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_set_crdt", 00:19:34.388 "params": { 00:19:34.388 "crdt1": 0, 00:19:34.388 "crdt2": 0, 00:19:34.388 "crdt3": 0 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_create_transport", 00:19:34.388 "params": { 00:19:34.388 "trtype": "TCP", 00:19:34.388 "max_queue_depth": 128, 00:19:34.388 "max_io_qpairs_per_ctrlr": 127, 00:19:34.388 "in_capsule_data_size": 4096, 00:19:34.388 "max_io_size": 131072, 00:19:34.388 "io_unit_size": 131072, 00:19:34.388 "max_aq_depth": 128, 00:19:34.388 "num_shared_buffers": 511, 00:19:34.388 "buf_cache_size": 4294967295, 00:19:34.388 "dif_insert_or_strip": false, 00:19:34.388 "zcopy": false, 00:19:34.388 "c2h_success": false, 00:19:34.388 "sock_priority": 0, 00:19:34.388 "abort_timeout_sec": 1, 00:19:34.388 "ack_timeout": 0, 00:19:34.388 "data_wr_pool_size": 0 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_create_subsystem", 00:19:34.388 "params": { 00:19:34.388 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.388 "allow_any_host": false, 00:19:34.388 "serial_number": "SPDK00000000000001", 00:19:34.388 "model_number": "SPDK bdev Controller", 00:19:34.388 "max_namespaces": 10, 00:19:34.388 "min_cntlid": 1, 00:19:34.388 "max_cntlid": 65519, 00:19:34.388 "ana_reporting": false 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_subsystem_add_host", 00:19:34.388 "params": { 00:19:34.388 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.388 "host": "nqn.2016-06.io.spdk:host1", 00:19:34.388 "psk": "key0" 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_subsystem_add_ns", 00:19:34.388 "params": { 00:19:34.388 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:19:34.388 "namespace": { 00:19:34.388 "nsid": 1, 00:19:34.388 "bdev_name": "malloc0", 00:19:34.388 "nguid": "1441E7D4AE7B4B1E8C37561B5E448397", 00:19:34.388 "uuid": "1441e7d4-ae7b-4b1e-8c37-561b5e448397", 00:19:34.388 "no_auto_visible": false 00:19:34.388 } 00:19:34.388 } 00:19:34.388 }, 00:19:34.388 { 00:19:34.388 "method": "nvmf_subsystem_add_listener", 00:19:34.388 "params": { 00:19:34.389 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.389 "listen_address": { 00:19:34.389 "trtype": "TCP", 00:19:34.389 "adrfam": "IPv4", 00:19:34.389 "traddr": "10.0.0.2", 00:19:34.389 "trsvcid": "4420" 00:19:34.389 }, 00:19:34.389 "secure_channel": true 00:19:34.389 } 00:19:34.389 } 00:19:34.389 ] 00:19:34.389 } 00:19:34.389 ] 00:19:34.389 }' 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2395675 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2395675 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2395675 ']' 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.389 22:50:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.389 [2024-12-10 22:50:41.930764] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:34.389 [2024-12-10 22:50:41.930812] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.389 [2024-12-10 22:50:42.008702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.389 [2024-12-10 22:50:42.045142] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.389 [2024-12-10 22:50:42.045183] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.389 [2024-12-10 22:50:42.045192] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.389 [2024-12-10 22:50:42.045199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.389 [2024-12-10 22:50:42.045205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:34.389 [2024-12-10 22:50:42.045817] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.648 [2024-12-10 22:50:42.260253] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:34.648 [2024-12-10 22:50:42.292285] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:34.648 [2024-12-10 22:50:42.292511] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2395731 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2395731 /var/tmp/bdevperf.sock 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2395731 ']' 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:35.216 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:19:35.216 "subsystems": [ 00:19:35.216 { 00:19:35.216 "subsystem": "keyring", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "keyring_file_add_key", 00:19:35.216 "params": { 00:19:35.216 "name": "key0", 00:19:35.216 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "iobuf", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "iobuf_set_options", 00:19:35.216 "params": { 00:19:35.216 "small_pool_count": 8192, 00:19:35.216 "large_pool_count": 1024, 00:19:35.216 "small_bufsize": 8192, 00:19:35.216 "large_bufsize": 135168, 00:19:35.216 "enable_numa": false 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "sock", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "sock_set_default_impl", 00:19:35.216 "params": { 00:19:35.216 "impl_name": "posix" 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "sock_impl_set_options", 00:19:35.216 "params": { 00:19:35.216 "impl_name": "ssl", 00:19:35.216 "recv_buf_size": 4096, 00:19:35.216 "send_buf_size": 4096, 00:19:35.216 "enable_recv_pipe": true, 00:19:35.216 "enable_quickack": false, 00:19:35.216 "enable_placement_id": 0, 00:19:35.216 "enable_zerocopy_send_server": true, 00:19:35.216 "enable_zerocopy_send_client": false, 00:19:35.216 "zerocopy_threshold": 0, 00:19:35.216 "tls_version": 0, 00:19:35.216 "enable_ktls": false 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "sock_impl_set_options", 00:19:35.216 "params": { 00:19:35.216 "impl_name": "posix", 00:19:35.216 "recv_buf_size": 2097152, 00:19:35.216 "send_buf_size": 2097152, 00:19:35.216 "enable_recv_pipe": true, 00:19:35.216 "enable_quickack": false, 00:19:35.216 "enable_placement_id": 0, 00:19:35.216 "enable_zerocopy_send_server": true, 00:19:35.216 
"enable_zerocopy_send_client": false, 00:19:35.216 "zerocopy_threshold": 0, 00:19:35.216 "tls_version": 0, 00:19:35.216 "enable_ktls": false 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "vmd", 00:19:35.216 "config": [] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "accel", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "accel_set_options", 00:19:35.216 "params": { 00:19:35.216 "small_cache_size": 128, 00:19:35.216 "large_cache_size": 16, 00:19:35.216 "task_count": 2048, 00:19:35.216 "sequence_count": 2048, 00:19:35.216 "buf_count": 2048 00:19:35.216 } 00:19:35.216 } 00:19:35.216 ] 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "subsystem": "bdev", 00:19:35.216 "config": [ 00:19:35.216 { 00:19:35.216 "method": "bdev_set_options", 00:19:35.216 "params": { 00:19:35.216 "bdev_io_pool_size": 65535, 00:19:35.216 "bdev_io_cache_size": 256, 00:19:35.216 "bdev_auto_examine": true, 00:19:35.216 "iobuf_small_cache_size": 128, 00:19:35.216 "iobuf_large_cache_size": 16 00:19:35.216 } 00:19:35.216 }, 00:19:35.216 { 00:19:35.216 "method": "bdev_raid_set_options", 00:19:35.216 "params": { 00:19:35.216 "process_window_size_kb": 1024, 00:19:35.216 "process_max_bandwidth_mb_sec": 0 00:19:35.216 } 00:19:35.216 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_iscsi_set_options", 00:19:35.217 "params": { 00:19:35.217 "timeout_sec": 30 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_nvme_set_options", 00:19:35.217 "params": { 00:19:35.217 "action_on_timeout": "none", 00:19:35.217 "timeout_us": 0, 00:19:35.217 "timeout_admin_us": 0, 00:19:35.217 "keep_alive_timeout_ms": 10000, 00:19:35.217 "arbitration_burst": 0, 00:19:35.217 "low_priority_weight": 0, 00:19:35.217 "medium_priority_weight": 0, 00:19:35.217 "high_priority_weight": 0, 00:19:35.217 "nvme_adminq_poll_period_us": 10000, 00:19:35.217 "nvme_ioq_poll_period_us": 0, 00:19:35.217 "io_queue_requests": 512, 00:19:35.217 
"delay_cmd_submit": true, 00:19:35.217 "transport_retry_count": 4, 00:19:35.217 "bdev_retry_count": 3, 00:19:35.217 "transport_ack_timeout": 0, 00:19:35.217 "ctrlr_loss_timeout_sec": 0, 00:19:35.217 "reconnect_delay_sec": 0, 00:19:35.217 "fast_io_fail_timeout_sec": 0, 00:19:35.217 "disable_auto_failback": false, 00:19:35.217 "generate_uuids": false, 00:19:35.217 "transport_tos": 0, 00:19:35.217 "nvme_error_stat": false, 00:19:35.217 "rdma_srq_size": 0, 00:19:35.217 "io_path_stat": false, 00:19:35.217 "allow_accel_sequence": false, 00:19:35.217 "rdma_max_cq_size": 0, 00:19:35.217 "rdma_cm_event_timeout_ms": 0, 00:19:35.217 "dhchap_digests": [ 00:19:35.217 "sha256", 00:19:35.217 "sha384", 00:19:35.217 "sha512" 00:19:35.217 ], 00:19:35.217 "dhchap_dhgroups": [ 00:19:35.217 "null", 00:19:35.217 "ffdhe2048", 00:19:35.217 "ffdhe3072", 00:19:35.217 "ffdhe4096", 00:19:35.217 "ffdhe6144", 00:19:35.217 "ffdhe8192" 00:19:35.217 ], 00:19:35.217 "rdma_umr_per_io": false 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_nvme_attach_controller", 00:19:35.217 "params": { 00:19:35.217 "name": "TLSTEST", 00:19:35.217 "trtype": "TCP", 00:19:35.217 "adrfam": "IPv4", 00:19:35.217 "traddr": "10.0.0.2", 00:19:35.217 "trsvcid": "4420", 00:19:35.217 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.217 "prchk_reftag": false, 00:19:35.217 "prchk_guard": false, 00:19:35.217 "ctrlr_loss_timeout_sec": 0, 00:19:35.217 "reconnect_delay_sec": 0, 00:19:35.217 "fast_io_fail_timeout_sec": 0, 00:19:35.217 "psk": "key0", 00:19:35.217 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.217 "hdgst": false, 00:19:35.217 "ddgst": false, 00:19:35.217 "multipath": "multipath" 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_nvme_set_hotplug", 00:19:35.217 "params": { 00:19:35.217 "period_us": 100000, 00:19:35.217 "enable": false 00:19:35.217 } 00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "method": "bdev_wait_for_examine" 00:19:35.217 } 00:19:35.217 ] 
00:19:35.217 }, 00:19:35.217 { 00:19:35.217 "subsystem": "nbd", 00:19:35.217 "config": [] 00:19:35.217 } 00:19:35.217 ] 00:19:35.217 }' 00:19:35.217 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:35.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:35.217 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:35.217 22:50:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.217 [2024-12-10 22:50:42.844331] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:35.217 [2024-12-10 22:50:42.844380] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395731 ] 00:19:35.217 [2024-12-10 22:50:42.921413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:35.476 [2024-12-10 22:50:42.964162] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:35.476 [2024-12-10 22:50:43.118057] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:36.044 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:36.044 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:36.044 22:50:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:36.302 Running I/O for 10 seconds... 
00:19:38.179 5478.00 IOPS, 21.40 MiB/s [2024-12-10T21:50:46.845Z] 5438.00 IOPS, 21.24 MiB/s [2024-12-10T21:50:48.222Z] 5374.67 IOPS, 20.99 MiB/s [2024-12-10T21:50:49.159Z] 5382.50 IOPS, 21.03 MiB/s [2024-12-10T21:50:50.096Z] 5402.80 IOPS, 21.10 MiB/s [2024-12-10T21:50:51.033Z] 5396.17 IOPS, 21.08 MiB/s [2024-12-10T21:50:51.969Z] 5395.43 IOPS, 21.08 MiB/s [2024-12-10T21:50:52.905Z] 5406.50 IOPS, 21.12 MiB/s [2024-12-10T21:50:53.843Z] 5410.78 IOPS, 21.14 MiB/s [2024-12-10T21:50:53.843Z] 5415.90 IOPS, 21.16 MiB/s 00:19:46.114 Latency(us) 00:19:46.114 [2024-12-10T21:50:53.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.114 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:46.114 Verification LBA range: start 0x0 length 0x2000 00:19:46.114 TLSTESTn1 : 10.02 5419.41 21.17 0.00 0.00 23582.18 5043.42 30089.57 00:19:46.114 [2024-12-10T21:50:53.843Z] =================================================================================================================== 00:19:46.114 [2024-12-10T21:50:53.843Z] Total : 5419.41 21.17 0.00 0.00 23582.18 5043.42 30089.57 00:19:46.114 { 00:19:46.114 "results": [ 00:19:46.114 { 00:19:46.114 "job": "TLSTESTn1", 00:19:46.114 "core_mask": "0x4", 00:19:46.114 "workload": "verify", 00:19:46.114 "status": "finished", 00:19:46.114 "verify_range": { 00:19:46.114 "start": 0, 00:19:46.114 "length": 8192 00:19:46.114 }, 00:19:46.114 "queue_depth": 128, 00:19:46.114 "io_size": 4096, 00:19:46.114 "runtime": 10.016961, 00:19:46.114 "iops": 5419.4081418506075, 00:19:46.114 "mibps": 21.169563054103936, 00:19:46.114 "io_failed": 0, 00:19:46.114 "io_timeout": 0, 00:19:46.114 "avg_latency_us": 23582.17544810176, 00:19:46.114 "min_latency_us": 5043.422608695652, 00:19:46.114 "max_latency_us": 30089.572173913042 00:19:46.114 } 00:19:46.114 ], 00:19:46.114 "core_count": 1 00:19:46.114 } 00:19:46.114 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:46.114 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2395731 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2395731 ']' 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2395731 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395731 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2395731' 00:19:46.373 killing process with pid 2395731 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2395731 00:19:46.373 Received shutdown signal, test time was about 10.000000 seconds 00:19:46.373 00:19:46.373 Latency(us) 00:19:46.373 [2024-12-10T21:50:54.102Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:46.373 [2024-12-10T21:50:54.102Z] =================================================================================================================== 00:19:46.373 [2024-12-10T21:50:54.102Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:46.373 22:50:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2395731 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2395675 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2395675 ']' 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2395675 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395675 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2395675' 00:19:46.373 killing process with pid 2395675 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2395675 00:19:46.373 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2395675 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2397580 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2397580 00:19:46.633 
22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2397580 ']' 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.633 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.633 [2024-12-10 22:50:54.319592] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:46.633 [2024-12-10 22:50:54.319640] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.892 [2024-12-10 22:50:54.399967] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.892 [2024-12-10 22:50:54.439785] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.892 [2024-12-10 22:50:54.439819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.892 [2024-12-10 22:50:54.439827] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.892 [2024-12-10 22:50:54.439833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:46.892 [2024-12-10 22:50:54.439838] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:46.892 [2024-12-10 22:50:54.440415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.xGIDcQlaD5 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.xGIDcQlaD5 00:19:46.892 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:47.151 [2024-12-10 22:50:54.754296] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.151 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:47.410 22:50:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:47.669 [2024-12-10 22:50:55.139281] 
tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:47.669 [2024-12-10 22:50:55.139517] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.669 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:47.669 malloc0 00:19:47.669 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:47.930 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2397950 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2397950 /var/tmp/bdevperf.sock 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2397950 ']' 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:48.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:48.190 22:50:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.449 [2024-12-10 22:50:55.940615] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:48.449 [2024-12-10 22:50:55.940666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2397950 ] 00:19:48.449 [2024-12-10 22:50:56.016163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.449 [2024-12-10 22:50:56.057552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:48.449 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:48.449 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:48.449 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:48.708 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:48.966 [2024-12-10 22:50:56.511784] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:48.966 nvme0n1 00:19:48.966 22:50:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:49.225 Running I/O for 1 seconds... 00:19:50.162 5300.00 IOPS, 20.70 MiB/s 00:19:50.162 Latency(us) 00:19:50.162 [2024-12-10T21:50:57.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.162 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:50.162 Verification LBA range: start 0x0 length 0x2000 00:19:50.162 nvme0n1 : 1.01 5364.99 20.96 0.00 0.00 23703.64 4929.45 38751.72 00:19:50.162 [2024-12-10T21:50:57.891Z] =================================================================================================================== 00:19:50.162 [2024-12-10T21:50:57.891Z] Total : 5364.99 20.96 0.00 0.00 23703.64 4929.45 38751.72 00:19:50.162 { 00:19:50.162 "results": [ 00:19:50.162 { 00:19:50.162 "job": "nvme0n1", 00:19:50.162 "core_mask": "0x2", 00:19:50.162 "workload": "verify", 00:19:50.162 "status": "finished", 00:19:50.162 "verify_range": { 00:19:50.162 "start": 0, 00:19:50.162 "length": 8192 00:19:50.162 }, 00:19:50.162 "queue_depth": 128, 00:19:50.162 "io_size": 4096, 00:19:50.162 "runtime": 1.011745, 00:19:50.162 "iops": 5364.9882134332265, 00:19:50.162 "mibps": 20.95698520872354, 00:19:50.162 "io_failed": 0, 00:19:50.162 "io_timeout": 0, 00:19:50.162 "avg_latency_us": 23703.64163403928, 00:19:50.162 "min_latency_us": 4929.446956521739, 00:19:50.162 "max_latency_us": 38751.72173913044 00:19:50.162 } 00:19:50.162 ], 00:19:50.162 "core_count": 1 00:19:50.162 } 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2397950 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2397950 ']' 00:19:50.162 
22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2397950 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397950 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397950' 00:19:50.162 killing process with pid 2397950 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2397950 00:19:50.162 Received shutdown signal, test time was about 1.000000 seconds 00:19:50.162 00:19:50.162 Latency(us) 00:19:50.162 [2024-12-10T21:50:57.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:50.162 [2024-12-10T21:50:57.891Z] =================================================================================================================== 00:19:50.162 [2024-12-10T21:50:57.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:50.162 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2397950 00:19:50.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2397580 00:19:50.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2397580 ']' 00:19:50.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2397580 00:19:50.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:50.420 
22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.420 22:50:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2397580 00:19:50.420 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.420 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.420 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2397580' 00:19:50.420 killing process with pid 2397580 00:19:50.420 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2397580 00:19:50.420 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2397580 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2398296 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2398296 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2398296 ']' 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.678 22:50:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.678 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.678 [2024-12-10 22:50:58.234184] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:50.678 [2024-12-10 22:50:58.234229] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.678 [2024-12-10 22:50:58.313852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.678 [2024-12-10 22:50:58.353619] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.678 [2024-12-10 22:50:58.353653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.678 [2024-12-10 22:50:58.353660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:50.678 [2024-12-10 22:50:58.353666] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:50.678 [2024-12-10 22:50:58.353671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:50.678 [2024-12-10 22:50:58.354257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.937 [2024-12-10 22:50:58.491657] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:50.937 malloc0 00:19:50.937 [2024-12-10 22:50:58.519967] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:50.937 [2024-12-10 22:50:58.520174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2398352 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2398352 /var/tmp/bdevperf.sock 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2398352 ']' 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:50.937 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.937 [2024-12-10 22:50:58.595396] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:19:50.937 [2024-12-10 22:50:58.595439] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2398352 ] 00:19:51.196 [2024-12-10 22:50:58.671547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.196 [2024-12-10 22:50:58.713482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.196 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.196 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:51.196 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.xGIDcQlaD5 00:19:51.487 22:50:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:51.487 [2024-12-10 22:50:59.151362] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.782 nvme0n1 00:19:51.782 22:50:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.782 Running I/O for 1 seconds... 
00:19:52.754 5175.00 IOPS, 20.21 MiB/s 00:19:52.754 Latency(us) 00:19:52.754 [2024-12-10T21:51:00.483Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.754 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:52.754 Verification LBA range: start 0x0 length 0x2000 00:19:52.754 nvme0n1 : 1.01 5238.25 20.46 0.00 0.00 24271.58 4815.47 46502.07 00:19:52.754 [2024-12-10T21:51:00.483Z] =================================================================================================================== 00:19:52.754 [2024-12-10T21:51:00.483Z] Total : 5238.25 20.46 0.00 0.00 24271.58 4815.47 46502.07 00:19:52.754 { 00:19:52.754 "results": [ 00:19:52.754 { 00:19:52.754 "job": "nvme0n1", 00:19:52.754 "core_mask": "0x2", 00:19:52.754 "workload": "verify", 00:19:52.754 "status": "finished", 00:19:52.754 "verify_range": { 00:19:52.754 "start": 0, 00:19:52.754 "length": 8192 00:19:52.754 }, 00:19:52.754 "queue_depth": 128, 00:19:52.754 "io_size": 4096, 00:19:52.754 "runtime": 1.012552, 00:19:52.754 "iops": 5238.2494923717495, 00:19:52.754 "mibps": 20.461912079577147, 00:19:52.754 "io_failed": 0, 00:19:52.754 "io_timeout": 0, 00:19:52.754 "avg_latency_us": 24271.581433536623, 00:19:52.754 "min_latency_us": 4815.471304347826, 00:19:52.754 "max_latency_us": 46502.06608695652 00:19:52.754 } 00:19:52.754 ], 00:19:52.754 "core_count": 1 00:19:52.754 } 00:19:52.754 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:19:52.754 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.754 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.013 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.013 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:19:53.013 "subsystems": [ 00:19:53.013 { 00:19:53.013 "subsystem": 
"keyring", 00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "keyring_file_add_key", 00:19:53.013 "params": { 00:19:53.013 "name": "key0", 00:19:53.013 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:53.013 } 00:19:53.013 } 00:19:53.013 ] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "iobuf", 00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "iobuf_set_options", 00:19:53.013 "params": { 00:19:53.013 "small_pool_count": 8192, 00:19:53.013 "large_pool_count": 1024, 00:19:53.013 "small_bufsize": 8192, 00:19:53.013 "large_bufsize": 135168, 00:19:53.013 "enable_numa": false 00:19:53.013 } 00:19:53.013 } 00:19:53.013 ] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "sock", 00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "sock_set_default_impl", 00:19:53.013 "params": { 00:19:53.013 "impl_name": "posix" 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "sock_impl_set_options", 00:19:53.013 "params": { 00:19:53.013 "impl_name": "ssl", 00:19:53.013 "recv_buf_size": 4096, 00:19:53.013 "send_buf_size": 4096, 00:19:53.013 "enable_recv_pipe": true, 00:19:53.013 "enable_quickack": false, 00:19:53.013 "enable_placement_id": 0, 00:19:53.013 "enable_zerocopy_send_server": true, 00:19:53.013 "enable_zerocopy_send_client": false, 00:19:53.013 "zerocopy_threshold": 0, 00:19:53.013 "tls_version": 0, 00:19:53.013 "enable_ktls": false 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "sock_impl_set_options", 00:19:53.013 "params": { 00:19:53.013 "impl_name": "posix", 00:19:53.013 "recv_buf_size": 2097152, 00:19:53.013 "send_buf_size": 2097152, 00:19:53.013 "enable_recv_pipe": true, 00:19:53.013 "enable_quickack": false, 00:19:53.013 "enable_placement_id": 0, 00:19:53.013 "enable_zerocopy_send_server": true, 00:19:53.013 "enable_zerocopy_send_client": false, 00:19:53.013 "zerocopy_threshold": 0, 00:19:53.013 "tls_version": 0, 00:19:53.013 "enable_ktls": false 00:19:53.013 } 00:19:53.013 } 00:19:53.013 
] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "vmd", 00:19:53.013 "config": [] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "accel", 00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "accel_set_options", 00:19:53.013 "params": { 00:19:53.013 "small_cache_size": 128, 00:19:53.013 "large_cache_size": 16, 00:19:53.013 "task_count": 2048, 00:19:53.013 "sequence_count": 2048, 00:19:53.013 "buf_count": 2048 00:19:53.013 } 00:19:53.013 } 00:19:53.013 ] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "bdev", 00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "bdev_set_options", 00:19:53.013 "params": { 00:19:53.013 "bdev_io_pool_size": 65535, 00:19:53.013 "bdev_io_cache_size": 256, 00:19:53.013 "bdev_auto_examine": true, 00:19:53.013 "iobuf_small_cache_size": 128, 00:19:53.013 "iobuf_large_cache_size": 16 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "bdev_raid_set_options", 00:19:53.013 "params": { 00:19:53.013 "process_window_size_kb": 1024, 00:19:53.013 "process_max_bandwidth_mb_sec": 0 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "bdev_iscsi_set_options", 00:19:53.013 "params": { 00:19:53.013 "timeout_sec": 30 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "bdev_nvme_set_options", 00:19:53.013 "params": { 00:19:53.013 "action_on_timeout": "none", 00:19:53.013 "timeout_us": 0, 00:19:53.013 "timeout_admin_us": 0, 00:19:53.013 "keep_alive_timeout_ms": 10000, 00:19:53.013 "arbitration_burst": 0, 00:19:53.013 "low_priority_weight": 0, 00:19:53.013 "medium_priority_weight": 0, 00:19:53.013 "high_priority_weight": 0, 00:19:53.013 "nvme_adminq_poll_period_us": 10000, 00:19:53.013 "nvme_ioq_poll_period_us": 0, 00:19:53.013 "io_queue_requests": 0, 00:19:53.013 "delay_cmd_submit": true, 00:19:53.013 "transport_retry_count": 4, 00:19:53.013 "bdev_retry_count": 3, 00:19:53.013 "transport_ack_timeout": 0, 00:19:53.013 "ctrlr_loss_timeout_sec": 0, 
00:19:53.013 "reconnect_delay_sec": 0, 00:19:53.013 "fast_io_fail_timeout_sec": 0, 00:19:53.013 "disable_auto_failback": false, 00:19:53.013 "generate_uuids": false, 00:19:53.013 "transport_tos": 0, 00:19:53.013 "nvme_error_stat": false, 00:19:53.013 "rdma_srq_size": 0, 00:19:53.013 "io_path_stat": false, 00:19:53.013 "allow_accel_sequence": false, 00:19:53.013 "rdma_max_cq_size": 0, 00:19:53.013 "rdma_cm_event_timeout_ms": 0, 00:19:53.013 "dhchap_digests": [ 00:19:53.013 "sha256", 00:19:53.013 "sha384", 00:19:53.013 "sha512" 00:19:53.013 ], 00:19:53.013 "dhchap_dhgroups": [ 00:19:53.013 "null", 00:19:53.013 "ffdhe2048", 00:19:53.013 "ffdhe3072", 00:19:53.013 "ffdhe4096", 00:19:53.013 "ffdhe6144", 00:19:53.013 "ffdhe8192" 00:19:53.013 ], 00:19:53.013 "rdma_umr_per_io": false 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "bdev_nvme_set_hotplug", 00:19:53.013 "params": { 00:19:53.013 "period_us": 100000, 00:19:53.013 "enable": false 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "bdev_malloc_create", 00:19:53.013 "params": { 00:19:53.013 "name": "malloc0", 00:19:53.013 "num_blocks": 8192, 00:19:53.013 "block_size": 4096, 00:19:53.013 "physical_block_size": 4096, 00:19:53.013 "uuid": "6ff3eec5-199c-4c96-8a44-7c3cbba6f259", 00:19:53.013 "optimal_io_boundary": 0, 00:19:53.013 "md_size": 0, 00:19:53.013 "dif_type": 0, 00:19:53.013 "dif_is_head_of_md": false, 00:19:53.013 "dif_pi_format": 0 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "bdev_wait_for_examine" 00:19:53.013 } 00:19:53.013 ] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "nbd", 00:19:53.013 "config": [] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "scheduler", 00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "framework_set_scheduler", 00:19:53.013 "params": { 00:19:53.013 "name": "static" 00:19:53.013 } 00:19:53.013 } 00:19:53.013 ] 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "subsystem": "nvmf", 
00:19:53.013 "config": [ 00:19:53.013 { 00:19:53.013 "method": "nvmf_set_config", 00:19:53.013 "params": { 00:19:53.013 "discovery_filter": "match_any", 00:19:53.013 "admin_cmd_passthru": { 00:19:53.013 "identify_ctrlr": false 00:19:53.013 }, 00:19:53.013 "dhchap_digests": [ 00:19:53.013 "sha256", 00:19:53.013 "sha384", 00:19:53.013 "sha512" 00:19:53.013 ], 00:19:53.013 "dhchap_dhgroups": [ 00:19:53.013 "null", 00:19:53.013 "ffdhe2048", 00:19:53.013 "ffdhe3072", 00:19:53.013 "ffdhe4096", 00:19:53.013 "ffdhe6144", 00:19:53.013 "ffdhe8192" 00:19:53.013 ] 00:19:53.013 } 00:19:53.013 }, 00:19:53.013 { 00:19:53.013 "method": "nvmf_set_max_subsystems", 00:19:53.013 "params": { 00:19:53.014 "max_subsystems": 1024 00:19:53.014 } 00:19:53.014 }, 00:19:53.014 { 00:19:53.014 "method": "nvmf_set_crdt", 00:19:53.014 "params": { 00:19:53.014 "crdt1": 0, 00:19:53.014 "crdt2": 0, 00:19:53.014 "crdt3": 0 00:19:53.014 } 00:19:53.014 }, 00:19:53.014 { 00:19:53.014 "method": "nvmf_create_transport", 00:19:53.014 "params": { 00:19:53.014 "trtype": "TCP", 00:19:53.014 "max_queue_depth": 128, 00:19:53.014 "max_io_qpairs_per_ctrlr": 127, 00:19:53.014 "in_capsule_data_size": 4096, 00:19:53.014 "max_io_size": 131072, 00:19:53.014 "io_unit_size": 131072, 00:19:53.014 "max_aq_depth": 128, 00:19:53.014 "num_shared_buffers": 511, 00:19:53.014 "buf_cache_size": 4294967295, 00:19:53.014 "dif_insert_or_strip": false, 00:19:53.014 "zcopy": false, 00:19:53.014 "c2h_success": false, 00:19:53.014 "sock_priority": 0, 00:19:53.014 "abort_timeout_sec": 1, 00:19:53.014 "ack_timeout": 0, 00:19:53.014 "data_wr_pool_size": 0 00:19:53.014 } 00:19:53.014 }, 00:19:53.014 { 00:19:53.014 "method": "nvmf_create_subsystem", 00:19:53.014 "params": { 00:19:53.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.014 "allow_any_host": false, 00:19:53.014 "serial_number": "00000000000000000000", 00:19:53.014 "model_number": "SPDK bdev Controller", 00:19:53.014 "max_namespaces": 32, 00:19:53.014 "min_cntlid": 1, 
00:19:53.014 "max_cntlid": 65519, 00:19:53.014 "ana_reporting": false 00:19:53.014 } 00:19:53.014 }, 00:19:53.014 { 00:19:53.014 "method": "nvmf_subsystem_add_host", 00:19:53.014 "params": { 00:19:53.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.014 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.014 "psk": "key0" 00:19:53.014 } 00:19:53.014 }, 00:19:53.014 { 00:19:53.014 "method": "nvmf_subsystem_add_ns", 00:19:53.014 "params": { 00:19:53.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.014 "namespace": { 00:19:53.014 "nsid": 1, 00:19:53.014 "bdev_name": "malloc0", 00:19:53.014 "nguid": "6FF3EEC5199C4C968A447C3CBBA6F259", 00:19:53.014 "uuid": "6ff3eec5-199c-4c96-8a44-7c3cbba6f259", 00:19:53.014 "no_auto_visible": false 00:19:53.014 } 00:19:53.014 } 00:19:53.014 }, 00:19:53.014 { 00:19:53.014 "method": "nvmf_subsystem_add_listener", 00:19:53.014 "params": { 00:19:53.014 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.014 "listen_address": { 00:19:53.014 "trtype": "TCP", 00:19:53.014 "adrfam": "IPv4", 00:19:53.014 "traddr": "10.0.0.2", 00:19:53.014 "trsvcid": "4420" 00:19:53.014 }, 00:19:53.014 "secure_channel": false, 00:19:53.014 "sock_impl": "ssl" 00:19:53.014 } 00:19:53.014 } 00:19:53.014 ] 00:19:53.014 } 00:19:53.014 ] 00:19:53.014 }' 00:19:53.014 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:53.273 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:19:53.273 "subsystems": [ 00:19:53.273 { 00:19:53.273 "subsystem": "keyring", 00:19:53.273 "config": [ 00:19:53.273 { 00:19:53.273 "method": "keyring_file_add_key", 00:19:53.273 "params": { 00:19:53.273 "name": "key0", 00:19:53.273 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:53.273 } 00:19:53.273 } 00:19:53.273 ] 00:19:53.273 }, 00:19:53.273 { 00:19:53.273 "subsystem": "iobuf", 00:19:53.273 "config": [ 00:19:53.273 { 00:19:53.273 "method": 
"iobuf_set_options", 00:19:53.273 "params": { 00:19:53.273 "small_pool_count": 8192, 00:19:53.273 "large_pool_count": 1024, 00:19:53.273 "small_bufsize": 8192, 00:19:53.273 "large_bufsize": 135168, 00:19:53.273 "enable_numa": false 00:19:53.273 } 00:19:53.273 } 00:19:53.273 ] 00:19:53.273 }, 00:19:53.273 { 00:19:53.273 "subsystem": "sock", 00:19:53.273 "config": [ 00:19:53.273 { 00:19:53.273 "method": "sock_set_default_impl", 00:19:53.273 "params": { 00:19:53.273 "impl_name": "posix" 00:19:53.273 } 00:19:53.273 }, 00:19:53.273 { 00:19:53.273 "method": "sock_impl_set_options", 00:19:53.273 "params": { 00:19:53.273 "impl_name": "ssl", 00:19:53.273 "recv_buf_size": 4096, 00:19:53.273 "send_buf_size": 4096, 00:19:53.274 "enable_recv_pipe": true, 00:19:53.274 "enable_quickack": false, 00:19:53.274 "enable_placement_id": 0, 00:19:53.274 "enable_zerocopy_send_server": true, 00:19:53.274 "enable_zerocopy_send_client": false, 00:19:53.274 "zerocopy_threshold": 0, 00:19:53.274 "tls_version": 0, 00:19:53.274 "enable_ktls": false 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "sock_impl_set_options", 00:19:53.274 "params": { 00:19:53.274 "impl_name": "posix", 00:19:53.274 "recv_buf_size": 2097152, 00:19:53.274 "send_buf_size": 2097152, 00:19:53.274 "enable_recv_pipe": true, 00:19:53.274 "enable_quickack": false, 00:19:53.274 "enable_placement_id": 0, 00:19:53.274 "enable_zerocopy_send_server": true, 00:19:53.274 "enable_zerocopy_send_client": false, 00:19:53.274 "zerocopy_threshold": 0, 00:19:53.274 "tls_version": 0, 00:19:53.274 "enable_ktls": false 00:19:53.274 } 00:19:53.274 } 00:19:53.274 ] 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "subsystem": "vmd", 00:19:53.274 "config": [] 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "subsystem": "accel", 00:19:53.274 "config": [ 00:19:53.274 { 00:19:53.274 "method": "accel_set_options", 00:19:53.274 "params": { 00:19:53.274 "small_cache_size": 128, 00:19:53.274 "large_cache_size": 16, 00:19:53.274 "task_count": 
2048, 00:19:53.274 "sequence_count": 2048, 00:19:53.274 "buf_count": 2048 00:19:53.274 } 00:19:53.274 } 00:19:53.274 ] 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "subsystem": "bdev", 00:19:53.274 "config": [ 00:19:53.274 { 00:19:53.274 "method": "bdev_set_options", 00:19:53.274 "params": { 00:19:53.274 "bdev_io_pool_size": 65535, 00:19:53.274 "bdev_io_cache_size": 256, 00:19:53.274 "bdev_auto_examine": true, 00:19:53.274 "iobuf_small_cache_size": 128, 00:19:53.274 "iobuf_large_cache_size": 16 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_raid_set_options", 00:19:53.274 "params": { 00:19:53.274 "process_window_size_kb": 1024, 00:19:53.274 "process_max_bandwidth_mb_sec": 0 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_iscsi_set_options", 00:19:53.274 "params": { 00:19:53.274 "timeout_sec": 30 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_nvme_set_options", 00:19:53.274 "params": { 00:19:53.274 "action_on_timeout": "none", 00:19:53.274 "timeout_us": 0, 00:19:53.274 "timeout_admin_us": 0, 00:19:53.274 "keep_alive_timeout_ms": 10000, 00:19:53.274 "arbitration_burst": 0, 00:19:53.274 "low_priority_weight": 0, 00:19:53.274 "medium_priority_weight": 0, 00:19:53.274 "high_priority_weight": 0, 00:19:53.274 "nvme_adminq_poll_period_us": 10000, 00:19:53.274 "nvme_ioq_poll_period_us": 0, 00:19:53.274 "io_queue_requests": 512, 00:19:53.274 "delay_cmd_submit": true, 00:19:53.274 "transport_retry_count": 4, 00:19:53.274 "bdev_retry_count": 3, 00:19:53.274 "transport_ack_timeout": 0, 00:19:53.274 "ctrlr_loss_timeout_sec": 0, 00:19:53.274 "reconnect_delay_sec": 0, 00:19:53.274 "fast_io_fail_timeout_sec": 0, 00:19:53.274 "disable_auto_failback": false, 00:19:53.274 "generate_uuids": false, 00:19:53.274 "transport_tos": 0, 00:19:53.274 "nvme_error_stat": false, 00:19:53.274 "rdma_srq_size": 0, 00:19:53.274 "io_path_stat": false, 00:19:53.274 "allow_accel_sequence": false, 00:19:53.274 
"rdma_max_cq_size": 0, 00:19:53.274 "rdma_cm_event_timeout_ms": 0, 00:19:53.274 "dhchap_digests": [ 00:19:53.274 "sha256", 00:19:53.274 "sha384", 00:19:53.274 "sha512" 00:19:53.274 ], 00:19:53.274 "dhchap_dhgroups": [ 00:19:53.274 "null", 00:19:53.274 "ffdhe2048", 00:19:53.274 "ffdhe3072", 00:19:53.274 "ffdhe4096", 00:19:53.274 "ffdhe6144", 00:19:53.274 "ffdhe8192" 00:19:53.274 ], 00:19:53.274 "rdma_umr_per_io": false 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_nvme_attach_controller", 00:19:53.274 "params": { 00:19:53.274 "name": "nvme0", 00:19:53.274 "trtype": "TCP", 00:19:53.274 "adrfam": "IPv4", 00:19:53.274 "traddr": "10.0.0.2", 00:19:53.274 "trsvcid": "4420", 00:19:53.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.274 "prchk_reftag": false, 00:19:53.274 "prchk_guard": false, 00:19:53.274 "ctrlr_loss_timeout_sec": 0, 00:19:53.274 "reconnect_delay_sec": 0, 00:19:53.274 "fast_io_fail_timeout_sec": 0, 00:19:53.274 "psk": "key0", 00:19:53.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:53.274 "hdgst": false, 00:19:53.274 "ddgst": false, 00:19:53.274 "multipath": "multipath" 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_nvme_set_hotplug", 00:19:53.274 "params": { 00:19:53.274 "period_us": 100000, 00:19:53.274 "enable": false 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_enable_histogram", 00:19:53.274 "params": { 00:19:53.274 "name": "nvme0n1", 00:19:53.274 "enable": true 00:19:53.274 } 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "method": "bdev_wait_for_examine" 00:19:53.274 } 00:19:53.274 ] 00:19:53.274 }, 00:19:53.274 { 00:19:53.274 "subsystem": "nbd", 00:19:53.274 "config": [] 00:19:53.274 } 00:19:53.274 ] 00:19:53.274 }' 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2398352 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2398352 ']' 00:19:53.274 22:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2398352 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398352 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398352' 00:19:53.274 killing process with pid 2398352 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2398352 00:19:53.274 Received shutdown signal, test time was about 1.000000 seconds 00:19:53.274 00:19:53.274 Latency(us) 00:19:53.274 [2024-12-10T21:51:01.003Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.274 [2024-12-10T21:51:01.003Z] =================================================================================================================== 00:19:53.274 [2024-12-10T21:51:01.003Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2398352 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2398296 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2398296 ']' 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2398296 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:53.274 22:51:00 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:53.274 22:51:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398296 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398296' 00:19:53.534 killing process with pid 2398296 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2398296 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2398296 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:53.534 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:19:53.534 "subsystems": [ 00:19:53.534 { 00:19:53.534 "subsystem": "keyring", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": "keyring_file_add_key", 00:19:53.534 "params": { 00:19:53.534 "name": "key0", 00:19:53.534 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:53.534 } 00:19:53.534 } 00:19:53.534 ] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "iobuf", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": "iobuf_set_options", 00:19:53.534 "params": { 00:19:53.534 "small_pool_count": 8192, 00:19:53.534 "large_pool_count": 1024, 00:19:53.534 "small_bufsize": 8192, 00:19:53.534 "large_bufsize": 135168, 00:19:53.534 "enable_numa": false 00:19:53.534 } 00:19:53.534 } 
00:19:53.534 ] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "sock", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": "sock_set_default_impl", 00:19:53.534 "params": { 00:19:53.534 "impl_name": "posix" 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "sock_impl_set_options", 00:19:53.534 "params": { 00:19:53.534 "impl_name": "ssl", 00:19:53.534 "recv_buf_size": 4096, 00:19:53.534 "send_buf_size": 4096, 00:19:53.534 "enable_recv_pipe": true, 00:19:53.534 "enable_quickack": false, 00:19:53.534 "enable_placement_id": 0, 00:19:53.534 "enable_zerocopy_send_server": true, 00:19:53.534 "enable_zerocopy_send_client": false, 00:19:53.534 "zerocopy_threshold": 0, 00:19:53.534 "tls_version": 0, 00:19:53.534 "enable_ktls": false 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "sock_impl_set_options", 00:19:53.534 "params": { 00:19:53.534 "impl_name": "posix", 00:19:53.534 "recv_buf_size": 2097152, 00:19:53.534 "send_buf_size": 2097152, 00:19:53.534 "enable_recv_pipe": true, 00:19:53.534 "enable_quickack": false, 00:19:53.534 "enable_placement_id": 0, 00:19:53.534 "enable_zerocopy_send_server": true, 00:19:53.534 "enable_zerocopy_send_client": false, 00:19:53.534 "zerocopy_threshold": 0, 00:19:53.534 "tls_version": 0, 00:19:53.534 "enable_ktls": false 00:19:53.534 } 00:19:53.534 } 00:19:53.534 ] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "vmd", 00:19:53.534 "config": [] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "accel", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": "accel_set_options", 00:19:53.534 "params": { 00:19:53.534 "small_cache_size": 128, 00:19:53.534 "large_cache_size": 16, 00:19:53.534 "task_count": 2048, 00:19:53.534 "sequence_count": 2048, 00:19:53.534 "buf_count": 2048 00:19:53.534 } 00:19:53.534 } 00:19:53.534 ] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "bdev", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": 
"bdev_set_options", 00:19:53.534 "params": { 00:19:53.534 "bdev_io_pool_size": 65535, 00:19:53.534 "bdev_io_cache_size": 256, 00:19:53.534 "bdev_auto_examine": true, 00:19:53.534 "iobuf_small_cache_size": 128, 00:19:53.534 "iobuf_large_cache_size": 16 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "bdev_raid_set_options", 00:19:53.534 "params": { 00:19:53.534 "process_window_size_kb": 1024, 00:19:53.534 "process_max_bandwidth_mb_sec": 0 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "bdev_iscsi_set_options", 00:19:53.534 "params": { 00:19:53.534 "timeout_sec": 30 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "bdev_nvme_set_options", 00:19:53.534 "params": { 00:19:53.534 "action_on_timeout": "none", 00:19:53.534 "timeout_us": 0, 00:19:53.534 "timeout_admin_us": 0, 00:19:53.534 "keep_alive_timeout_ms": 10000, 00:19:53.534 "arbitration_burst": 0, 00:19:53.534 "low_priority_weight": 0, 00:19:53.534 "medium_priority_weight": 0, 00:19:53.534 "high_priority_weight": 0, 00:19:53.534 "nvme_adminq_poll_period_us": 10000, 00:19:53.534 "nvme_ioq_poll_period_us": 0, 00:19:53.534 "io_queue_requests": 0, 00:19:53.534 "delay_cmd_submit": true, 00:19:53.534 "transport_retry_count": 4, 00:19:53.534 "bdev_retry_count": 3, 00:19:53.534 "transport_ack_timeout": 0, 00:19:53.534 "ctrlr_loss_timeout_sec": 0, 00:19:53.534 "reconnect_delay_sec": 0, 00:19:53.534 "fast_io_fail_timeout_sec": 0, 00:19:53.534 "disable_auto_failback": false, 00:19:53.534 "generate_uuids": false, 00:19:53.534 "transport_tos": 0, 00:19:53.534 "nvme_error_stat": false, 00:19:53.534 "rdma_srq_size": 0, 00:19:53.534 "io_path_stat": false, 00:19:53.534 "allow_accel_sequence": false, 00:19:53.534 "rdma_max_cq_size": 0, 00:19:53.534 "rdma_cm_event_timeout_ms": 0, 00:19:53.534 "dhchap_digests": [ 00:19:53.534 "sha256", 00:19:53.534 "sha384", 00:19:53.534 "sha512" 00:19:53.534 ], 00:19:53.534 "dhchap_dhgroups": [ 00:19:53.534 "null", 00:19:53.534 
"ffdhe2048", 00:19:53.534 "ffdhe3072", 00:19:53.534 "ffdhe4096", 00:19:53.534 "ffdhe6144", 00:19:53.534 "ffdhe8192" 00:19:53.534 ], 00:19:53.534 "rdma_umr_per_io": false 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "bdev_nvme_set_hotplug", 00:19:53.534 "params": { 00:19:53.534 "period_us": 100000, 00:19:53.534 "enable": false 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "bdev_malloc_create", 00:19:53.534 "params": { 00:19:53.534 "name": "malloc0", 00:19:53.534 "num_blocks": 8192, 00:19:53.534 "block_size": 4096, 00:19:53.534 "physical_block_size": 4096, 00:19:53.534 "uuid": "6ff3eec5-199c-4c96-8a44-7c3cbba6f259", 00:19:53.534 "optimal_io_boundary": 0, 00:19:53.534 "md_size": 0, 00:19:53.534 "dif_type": 0, 00:19:53.534 "dif_is_head_of_md": false, 00:19:53.534 "dif_pi_format": 0 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "bdev_wait_for_examine" 00:19:53.534 } 00:19:53.534 ] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "nbd", 00:19:53.534 "config": [] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "scheduler", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": "framework_set_scheduler", 00:19:53.534 "params": { 00:19:53.534 "name": "static" 00:19:53.534 } 00:19:53.534 } 00:19:53.534 ] 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "subsystem": "nvmf", 00:19:53.534 "config": [ 00:19:53.534 { 00:19:53.534 "method": "nvmf_set_config", 00:19:53.534 "params": { 00:19:53.534 "discovery_filter": "match_any", 00:19:53.534 "admin_cmd_passthru": { 00:19:53.534 "identify_ctrlr": false 00:19:53.534 }, 00:19:53.534 "dhchap_digests": [ 00:19:53.534 "sha256", 00:19:53.534 "sha384", 00:19:53.534 "sha512" 00:19:53.534 ], 00:19:53.534 "dhchap_dhgroups": [ 00:19:53.534 "null", 00:19:53.534 "ffdhe2048", 00:19:53.534 "ffdhe3072", 00:19:53.534 "ffdhe4096", 00:19:53.534 "ffdhe6144", 00:19:53.534 "ffdhe8192" 00:19:53.534 ] 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 
"method": "nvmf_set_max_subsystems", 00:19:53.534 "params": { 00:19:53.534 "max_subsystems": 1024 00:19:53.534 } 00:19:53.534 }, 00:19:53.534 { 00:19:53.534 "method": "nvmf_set_crdt", 00:19:53.534 "params": { 00:19:53.534 "crdt1": 0, 00:19:53.535 "crdt2": 0, 00:19:53.535 "crdt3": 0 00:19:53.535 } 00:19:53.535 }, 00:19:53.535 { 00:19:53.535 "method": "nvmf_create_transport", 00:19:53.535 "params": { 00:19:53.535 "trtype": "TCP", 00:19:53.535 "max_queue_depth": 128, 00:19:53.535 "max_io_qpairs_per_ctrlr": 127, 00:19:53.535 "in_capsule_data_size": 4096, 00:19:53.535 "max_io_size": 131072, 00:19:53.535 "io_unit_size": 131072, 00:19:53.535 "max_aq_depth": 128, 00:19:53.535 "num_shared_buffers": 511, 00:19:53.535 "buf_cache_size": 4294967295, 00:19:53.535 "dif_insert_or_strip": false, 00:19:53.535 "zcopy": false, 00:19:53.535 "c2h_success": false, 00:19:53.535 "sock_priority": 0, 00:19:53.535 "abort_timeout_sec": 1, 00:19:53.535 "ack_timeout": 0, 00:19:53.535 "data_wr_pool_size": 0 00:19:53.535 } 00:19:53.535 }, 00:19:53.535 { 00:19:53.535 "method": "nvmf_create_subsystem", 00:19:53.535 "params": { 00:19:53.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.535 "allow_any_host": false, 00:19:53.535 "serial_number": "00000000000000000000", 00:19:53.535 "model_number": "SPDK bdev Controller", 00:19:53.535 "max_namespaces": 32, 00:19:53.535 "min_cntlid": 1, 00:19:53.535 "max_cntlid": 65519, 00:19:53.535 "ana_reporting": false 00:19:53.535 } 00:19:53.535 }, 00:19:53.535 { 00:19:53.535 "method": "nvmf_subsystem_add_host", 00:19:53.535 "params": { 00:19:53.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.535 "host": "nqn.2016-06.io.spdk:host1", 00:19:53.535 "psk": "key0" 00:19:53.535 } 00:19:53.535 }, 00:19:53.535 { 00:19:53.535 "method": "nvmf_subsystem_add_ns", 00:19:53.535 "params": { 00:19:53.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.535 "namespace": { 00:19:53.535 "nsid": 1, 00:19:53.535 "bdev_name": "malloc0", 00:19:53.535 "nguid": 
"6FF3EEC5199C4C968A447C3CBBA6F259", 00:19:53.535 "uuid": "6ff3eec5-199c-4c96-8a44-7c3cbba6f259", 00:19:53.535 "no_auto_visible": false 00:19:53.535 } 00:19:53.535 } 00:19:53.535 }, 00:19:53.535 { 00:19:53.535 "method": "nvmf_subsystem_add_listener", 00:19:53.535 "params": { 00:19:53.535 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:53.535 "listen_address": { 00:19:53.535 "trtype": "TCP", 00:19:53.535 "adrfam": "IPv4", 00:19:53.535 "traddr": "10.0.0.2", 00:19:53.535 "trsvcid": "4420" 00:19:53.535 }, 00:19:53.535 "secure_channel": false, 00:19:53.535 "sock_impl": "ssl" 00:19:53.535 } 00:19:53.535 } 00:19:53.535 ] 00:19:53.535 } 00:19:53.535 ] 00:19:53.535 }' 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2398866 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2398866 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2398866 ']' 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.535 22:51:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.535 [2024-12-10 22:51:01.230842] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:53.535 [2024-12-10 22:51:01.230892] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.794 [2024-12-10 22:51:01.310091] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.794 [2024-12-10 22:51:01.350168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.794 [2024-12-10 22:51:01.350203] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.794 [2024-12-10 22:51:01.350210] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.794 [2024-12-10 22:51:01.350217] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.794 [2024-12-10 22:51:01.350222] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.794 [2024-12-10 22:51:01.350787] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.053 [2024-12-10 22:51:01.563809] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.053 [2024-12-10 22:51:01.595843] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.053 [2024-12-10 22:51:01.596049] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2399163 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2399163 /var/tmp/bdevperf.sock 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2399163 ']' 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.621 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:19:54.621 "subsystems": [ 00:19:54.621 { 00:19:54.621 "subsystem": "keyring", 00:19:54.621 "config": [ 00:19:54.621 { 00:19:54.621 "method": "keyring_file_add_key", 00:19:54.621 "params": { 00:19:54.621 "name": "key0", 00:19:54.621 "path": "/tmp/tmp.xGIDcQlaD5" 00:19:54.621 } 00:19:54.621 } 00:19:54.621 ] 00:19:54.621 }, 00:19:54.621 { 00:19:54.621 "subsystem": "iobuf", 00:19:54.621 "config": [ 00:19:54.621 { 00:19:54.621 "method": "iobuf_set_options", 00:19:54.621 "params": { 00:19:54.621 "small_pool_count": 8192, 00:19:54.621 "large_pool_count": 1024, 00:19:54.621 "small_bufsize": 8192, 00:19:54.621 "large_bufsize": 135168, 00:19:54.621 "enable_numa": false 00:19:54.621 } 00:19:54.621 } 00:19:54.621 ] 00:19:54.621 }, 00:19:54.621 { 00:19:54.621 "subsystem": "sock", 00:19:54.621 "config": [ 00:19:54.621 { 00:19:54.621 "method": "sock_set_default_impl", 00:19:54.621 "params": { 00:19:54.621 "impl_name": "posix" 00:19:54.621 } 00:19:54.621 }, 00:19:54.621 { 00:19:54.621 "method": "sock_impl_set_options", 00:19:54.621 "params": { 00:19:54.621 "impl_name": "ssl", 00:19:54.621 "recv_buf_size": 4096, 00:19:54.621 "send_buf_size": 4096, 00:19:54.621 "enable_recv_pipe": true, 00:19:54.621 "enable_quickack": false, 00:19:54.621 "enable_placement_id": 0, 00:19:54.622 "enable_zerocopy_send_server": true, 00:19:54.622 "enable_zerocopy_send_client": false, 00:19:54.622 "zerocopy_threshold": 0, 00:19:54.622 "tls_version": 0, 00:19:54.622 "enable_ktls": false 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "sock_impl_set_options", 00:19:54.622 "params": { 
00:19:54.622 "impl_name": "posix", 00:19:54.622 "recv_buf_size": 2097152, 00:19:54.622 "send_buf_size": 2097152, 00:19:54.622 "enable_recv_pipe": true, 00:19:54.622 "enable_quickack": false, 00:19:54.622 "enable_placement_id": 0, 00:19:54.622 "enable_zerocopy_send_server": true, 00:19:54.622 "enable_zerocopy_send_client": false, 00:19:54.622 "zerocopy_threshold": 0, 00:19:54.622 "tls_version": 0, 00:19:54.622 "enable_ktls": false 00:19:54.622 } 00:19:54.622 } 00:19:54.622 ] 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "subsystem": "vmd", 00:19:54.622 "config": [] 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "subsystem": "accel", 00:19:54.622 "config": [ 00:19:54.622 { 00:19:54.622 "method": "accel_set_options", 00:19:54.622 "params": { 00:19:54.622 "small_cache_size": 128, 00:19:54.622 "large_cache_size": 16, 00:19:54.622 "task_count": 2048, 00:19:54.622 "sequence_count": 2048, 00:19:54.622 "buf_count": 2048 00:19:54.622 } 00:19:54.622 } 00:19:54.622 ] 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "subsystem": "bdev", 00:19:54.622 "config": [ 00:19:54.622 { 00:19:54.622 "method": "bdev_set_options", 00:19:54.622 "params": { 00:19:54.622 "bdev_io_pool_size": 65535, 00:19:54.622 "bdev_io_cache_size": 256, 00:19:54.622 "bdev_auto_examine": true, 00:19:54.622 "iobuf_small_cache_size": 128, 00:19:54.622 "iobuf_large_cache_size": 16 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "bdev_raid_set_options", 00:19:54.622 "params": { 00:19:54.622 "process_window_size_kb": 1024, 00:19:54.622 "process_max_bandwidth_mb_sec": 0 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "bdev_iscsi_set_options", 00:19:54.622 "params": { 00:19:54.622 "timeout_sec": 30 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "bdev_nvme_set_options", 00:19:54.622 "params": { 00:19:54.622 "action_on_timeout": "none", 00:19:54.622 "timeout_us": 0, 00:19:54.622 "timeout_admin_us": 0, 00:19:54.622 "keep_alive_timeout_ms": 10000, 00:19:54.622 
"arbitration_burst": 0, 00:19:54.622 "low_priority_weight": 0, 00:19:54.622 "medium_priority_weight": 0, 00:19:54.622 "high_priority_weight": 0, 00:19:54.622 "nvme_adminq_poll_period_us": 10000, 00:19:54.622 "nvme_ioq_poll_period_us": 0, 00:19:54.622 "io_queue_requests": 512, 00:19:54.622 "delay_cmd_submit": true, 00:19:54.622 "transport_retry_count": 4, 00:19:54.622 "bdev_retry_count": 3, 00:19:54.622 "transport_ack_timeout": 0, 00:19:54.622 "ctrlr_loss_timeout_sec": 0, 00:19:54.622 "reconnect_delay_sec": 0, 00:19:54.622 "fast_io_fail_timeout_sec": 0, 00:19:54.622 "disable_auto_failback": false, 00:19:54.622 "generate_uuids": false, 00:19:54.622 "transport_tos": 0, 00:19:54.622 "nvme_error_stat": false, 00:19:54.622 "rdma_srq_size": 0, 00:19:54.622 "io_path_stat": false, 00:19:54.622 "allow_accel_sequence": false, 00:19:54.622 "rdma_max_cq_size": 0, 00:19:54.622 "rdma_cm_event_timeout_ms": 0, 00:19:54.622 "dhchap_digests": [ 00:19:54.622 "sha256", 00:19:54.622 "sha384", 00:19:54.622 "sha512" 00:19:54.622 ], 00:19:54.622 "dhchap_dhgroups": [ 00:19:54.622 "null", 00:19:54.622 "ffdhe2048", 00:19:54.622 "ffdhe3072", 00:19:54.622 "ffdhe4096", 00:19:54.622 "ffdhe6144", 00:19:54.622 "ffdhe8192" 00:19:54.622 ], 00:19:54.622 "rdma_umr_per_io": false 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "bdev_nvme_attach_controller", 00:19:54.622 "params": { 00:19:54.622 "name": "nvme0", 00:19:54.622 "trtype": "TCP", 00:19:54.622 "adrfam": "IPv4", 00:19:54.622 "traddr": "10.0.0.2", 00:19:54.622 "trsvcid": "4420", 00:19:54.622 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:54.622 "prchk_reftag": false, 00:19:54.622 "prchk_guard": false, 00:19:54.622 "ctrlr_loss_timeout_sec": 0, 00:19:54.622 "reconnect_delay_sec": 0, 00:19:54.622 "fast_io_fail_timeout_sec": 0, 00:19:54.622 "psk": "key0", 00:19:54.622 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:54.622 "hdgst": false, 00:19:54.622 "ddgst": false, 00:19:54.622 "multipath": "multipath" 00:19:54.622 } 00:19:54.622 
}, 00:19:54.622 { 00:19:54.622 "method": "bdev_nvme_set_hotplug", 00:19:54.622 "params": { 00:19:54.622 "period_us": 100000, 00:19:54.622 "enable": false 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "bdev_enable_histogram", 00:19:54.622 "params": { 00:19:54.622 "name": "nvme0n1", 00:19:54.622 "enable": true 00:19:54.622 } 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "method": "bdev_wait_for_examine" 00:19:54.622 } 00:19:54.622 ] 00:19:54.622 }, 00:19:54.622 { 00:19:54.622 "subsystem": "nbd", 00:19:54.622 "config": [] 00:19:54.622 } 00:19:54.622 ] 00:19:54.622 }' 00:19:54.622 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.622 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.622 [2024-12-10 22:51:02.151108] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:19:54.622 [2024-12-10 22:51:02.151155] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2399163 ] 00:19:54.622 [2024-12-10 22:51:02.226614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.622 [2024-12-10 22:51:02.268294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:54.881 [2024-12-10 22:51:02.423143] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.448 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:55.448 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:55.448 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:19:55.448 22:51:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:19:55.707 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.707 22:51:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.707 Running I/O for 1 seconds... 00:19:56.642 5269.00 IOPS, 20.58 MiB/s 00:19:56.642 Latency(us) 00:19:56.642 [2024-12-10T21:51:04.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.642 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.642 Verification LBA range: start 0x0 length 0x2000 00:19:56.642 nvme0n1 : 1.01 5327.30 20.81 0.00 0.00 23861.83 5128.90 24732.72 00:19:56.642 [2024-12-10T21:51:04.371Z] =================================================================================================================== 00:19:56.642 [2024-12-10T21:51:04.371Z] Total : 5327.30 20.81 0.00 0.00 23861.83 5128.90 24732.72 00:19:56.642 { 00:19:56.642 "results": [ 00:19:56.642 { 00:19:56.642 "job": "nvme0n1", 00:19:56.642 "core_mask": "0x2", 00:19:56.642 "workload": "verify", 00:19:56.642 "status": "finished", 00:19:56.642 "verify_range": { 00:19:56.642 "start": 0, 00:19:56.642 "length": 8192 00:19:56.642 }, 00:19:56.642 "queue_depth": 128, 00:19:56.642 "io_size": 4096, 00:19:56.642 "runtime": 1.013083, 00:19:56.642 "iops": 5327.302896208899, 00:19:56.642 "mibps": 20.809776938316013, 00:19:56.642 "io_failed": 0, 00:19:56.642 "io_timeout": 0, 00:19:56.642 "avg_latency_us": 23861.828077756567, 00:19:56.642 "min_latency_us": 5128.904347826087, 00:19:56.642 "max_latency_us": 24732.71652173913 00:19:56.642 } 00:19:56.642 ], 00:19:56.642 "core_count": 1 00:19:56.642 } 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:19:56.642 
22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:56.642 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:56.642 nvmf_trace.0 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2399163 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2399163 ']' 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2399163 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2399163 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2399163' 00:19:56.901 killing process with pid 2399163 00:19:56.901 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2399163 00:19:56.901 Received shutdown signal, test time was about 1.000000 seconds 00:19:56.901 00:19:56.901 Latency(us) 00:19:56.901 [2024-12-10T21:51:04.630Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.901 [2024-12-10T21:51:04.630Z] =================================================================================================================== 00:19:56.902 [2024-12-10T21:51:04.631Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2399163 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:56.902 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:56.902 rmmod nvme_tcp 00:19:57.161 rmmod nvme_fabrics 00:19:57.161 rmmod nvme_keyring 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2398866 ']' 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2398866 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2398866 ']' 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2398866 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2398866 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2398866' 00:19:57.161 killing process with pid 2398866 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2398866 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2398866 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:57.161 22:51:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:57.161 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:57.420 22:51:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.MFsO3UWCPp /tmp/tmp.spAOgbYNdq /tmp/tmp.xGIDcQlaD5 00:19:59.325 00:19:59.325 real 1m20.087s 00:19:59.325 user 2m3.613s 00:19:59.325 sys 0m29.449s 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:59.325 ************************************ 00:19:59.325 END TEST nvmf_tls 00:19:59.325 ************************************ 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.325 
22:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.325 22:51:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.325 ************************************ 00:19:59.325 START TEST nvmf_fips 00:19:59.325 ************************************ 00:19:59.325 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:59.586 * Looking for test storage... 00:19:59.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/fips 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.586 --rc genhtml_branch_coverage=1 00:19:59.586 --rc genhtml_function_coverage=1 00:19:59.586 --rc genhtml_legend=1 00:19:59.586 --rc geninfo_all_blocks=1 00:19:59.586 --rc geninfo_unexecuted_blocks=1 00:19:59.586 00:19:59.586 ' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.586 --rc genhtml_branch_coverage=1 00:19:59.586 --rc genhtml_function_coverage=1 00:19:59.586 --rc genhtml_legend=1 00:19:59.586 --rc geninfo_all_blocks=1 00:19:59.586 --rc geninfo_unexecuted_blocks=1 00:19:59.586 00:19:59.586 ' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.586 --rc genhtml_branch_coverage=1 00:19:59.586 --rc genhtml_function_coverage=1 00:19:59.586 --rc genhtml_legend=1 00:19:59.586 --rc geninfo_all_blocks=1 00:19:59.586 --rc geninfo_unexecuted_blocks=1 00:19:59.586 00:19:59.586 ' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:59.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.586 --rc genhtml_branch_coverage=1 00:19:59.586 --rc genhtml_function_coverage=1 00:19:59.586 --rc genhtml_legend=1 00:19:59.586 --rc geninfo_all_blocks=1 00:19:59.586 --rc geninfo_unexecuted_blocks=1 00:19:59.586 00:19:59.586 ' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:59.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:59.586 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:59.587 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:59.847 Error setting digest 00:19:59.847 40B21D80A47F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:59.847 40B21D80A47F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:59.847 22:51:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:19:59.847 22:51:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.420 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.420 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.420 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.420 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.420 22:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:06.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.424 ms 00:20:06.420 00:20:06.420 --- 10.0.0.2 ping statistics --- 00:20:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.420 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:20:06.420 00:20:06.420 --- 10.0.0.1 ping statistics --- 00:20:06.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.420 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:06.420 22:51:13 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2403574 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2403574 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2403574 ']' 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.420 22:51:13 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.420 [2024-12-10 22:51:13.413986] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:20:06.420 [2024-12-10 22:51:13.414036] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.420 [2024-12-10 22:51:13.495684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.420 [2024-12-10 22:51:13.536659] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.420 [2024-12-10 22:51:13.536694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.420 [2024-12-10 22:51:13.536701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.420 [2024-12-10 22:51:13.536707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.420 [2024-12-10 22:51:13.536712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.420 [2024-12-10 22:51:13.537307] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.pi5 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.pi5 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.pi5 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.pi5 00:20:06.679 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:20:06.937 [2024-12-10 22:51:14.460527] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.937 [2024-12-10 22:51:14.476533] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:06.937 [2024-12-10 22:51:14.476731] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.937 malloc0 00:20:06.937 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2403826 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2403826 /var/tmp/bdevperf.sock 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2403826 ']' 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:06.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.938 22:51:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:06.938 [2024-12-10 22:51:14.606016] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:20:06.938 [2024-12-10 22:51:14.606060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2403826 ] 00:20:07.197 [2024-12-10 22:51:14.679866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.197 [2024-12-10 22:51:14.719549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.763 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.763 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:20:07.763 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.pi5 00:20:08.022 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:20:08.281 [2024-12-10 22:51:15.801921] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:08.281 TLSTESTn1 00:20:08.281 22:51:15 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:08.281 Running I/O for 10 seconds... 
00:20:10.594 5273.00 IOPS, 20.60 MiB/s [2024-12-10T21:51:19.261Z] 5402.00 IOPS, 21.10 MiB/s [2024-12-10T21:51:20.198Z] 5361.00 IOPS, 20.94 MiB/s [2024-12-10T21:51:21.132Z] 5365.50 IOPS, 20.96 MiB/s [2024-12-10T21:51:22.068Z] 5390.40 IOPS, 21.06 MiB/s [2024-12-10T21:51:23.004Z] 5389.33 IOPS, 21.05 MiB/s [2024-12-10T21:51:24.380Z] 5393.43 IOPS, 21.07 MiB/s [2024-12-10T21:51:25.316Z] 5395.50 IOPS, 21.08 MiB/s [2024-12-10T21:51:26.251Z] 5392.11 IOPS, 21.06 MiB/s [2024-12-10T21:51:26.252Z] 5402.60 IOPS, 21.10 MiB/s 00:20:18.523 Latency(us) 00:20:18.523 [2024-12-10T21:51:26.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.523 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:18.523 Verification LBA range: start 0x0 length 0x2000 00:20:18.523 TLSTESTn1 : 10.02 5405.90 21.12 0.00 0.00 23640.90 6183.18 23137.06 00:20:18.523 [2024-12-10T21:51:26.252Z] =================================================================================================================== 00:20:18.523 [2024-12-10T21:51:26.252Z] Total : 5405.90 21.12 0.00 0.00 23640.90 6183.18 23137.06 00:20:18.523 { 00:20:18.523 "results": [ 00:20:18.523 { 00:20:18.523 "job": "TLSTESTn1", 00:20:18.523 "core_mask": "0x4", 00:20:18.523 "workload": "verify", 00:20:18.523 "status": "finished", 00:20:18.523 "verify_range": { 00:20:18.523 "start": 0, 00:20:18.523 "length": 8192 00:20:18.523 }, 00:20:18.523 "queue_depth": 128, 00:20:18.523 "io_size": 4096, 00:20:18.523 "runtime": 10.017208, 00:20:18.523 "iops": 5405.897531527747, 00:20:18.523 "mibps": 21.11678723253026, 00:20:18.523 "io_failed": 0, 00:20:18.523 "io_timeout": 0, 00:20:18.523 "avg_latency_us": 23640.89851458375, 00:20:18.523 "min_latency_us": 6183.179130434783, 00:20:18.523 "max_latency_us": 23137.057391304348 00:20:18.523 } 00:20:18.523 ], 00:20:18.523 "core_count": 1 00:20:18.523 } 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:20:18.523 
22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:18.523 nvmf_trace.0 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2403826 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2403826 ']' 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2403826 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2403826 00:20:18.523 22:51:26 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2403826' 00:20:18.523 killing process with pid 2403826 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2403826 00:20:18.523 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.523 00:20:18.523 Latency(us) 00:20:18.523 [2024-12-10T21:51:26.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.523 [2024-12-10T21:51:26.252Z] =================================================================================================================== 00:20:18.523 [2024-12-10T21:51:26.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:18.523 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2403826 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:18.782 rmmod nvme_tcp 00:20:18.782 rmmod nvme_fabrics 00:20:18.782 rmmod nvme_keyring 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2403574 ']' 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2403574 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2403574 ']' 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2403574 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2403574 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2403574' 00:20:18.782 killing process with pid 2403574 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2403574 00:20:18.782 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2403574 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # iptr 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:19.042 22:51:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.pi5 00:20:21.580 00:20:21.580 real 0m21.684s 00:20:21.580 user 0m23.564s 00:20:21.580 sys 0m9.553s 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:21.580 ************************************ 00:20:21.580 END TEST nvmf_fips 00:20:21.580 ************************************ 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:21.580 ************************************ 00:20:21.580 START TEST nvmf_control_msg_list 00:20:21.580 ************************************ 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:20:21.580 * Looking for test storage... 00:20:21.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:21.580 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:20:21.581 22:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- scripts/common.sh@366 -- # ver2[v]=2 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.581 --rc genhtml_branch_coverage=1 00:20:21.581 --rc genhtml_function_coverage=1 00:20:21.581 --rc genhtml_legend=1 00:20:21.581 --rc geninfo_all_blocks=1 00:20:21.581 --rc geninfo_unexecuted_blocks=1 00:20:21.581 00:20:21.581 ' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.581 --rc genhtml_branch_coverage=1 00:20:21.581 --rc genhtml_function_coverage=1 00:20:21.581 --rc genhtml_legend=1 00:20:21.581 --rc geninfo_all_blocks=1 00:20:21.581 --rc geninfo_unexecuted_blocks=1 00:20:21.581 00:20:21.581 ' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.581 --rc genhtml_branch_coverage=1 00:20:21.581 --rc genhtml_function_coverage=1 00:20:21.581 --rc genhtml_legend=1 00:20:21.581 --rc geninfo_all_blocks=1 00:20:21.581 --rc geninfo_unexecuted_blocks=1 00:20:21.581 00:20:21.581 ' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:20:21.581 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:21.581 --rc genhtml_branch_coverage=1 00:20:21.581 --rc genhtml_function_coverage=1 00:20:21.581 --rc genhtml_legend=1 00:20:21.581 --rc geninfo_all_blocks=1 00:20:21.581 --rc geninfo_unexecuted_blocks=1 00:20:21.581 00:20:21.581 ' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.581 22:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:21.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:21.581 22:51:28 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.581 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.582 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.582 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:21.582 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:21.582 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:20:21.582 22:51:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:20:28.153 22:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.153 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:28.153 22:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.153 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:28.153 22:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.153 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:28.153 22:51:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:28.153 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:28.153 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.153 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:20:28.153 00:20:28.153 --- 10.0.0.2 ping statistics --- 00:20:28.153 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.154 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:28.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:28.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:20:28.154 00:20:28.154 --- 10.0.0.1 ping statistics --- 00:20:28.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.154 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2409203 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2409203 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2409203 ']' 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:28.154 22:51:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 [2024-12-10 22:51:35.002672] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:20:28.154 [2024-12-10 22:51:35.002719] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.154 [2024-12-10 22:51:35.082731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:28.154 [2024-12-10 22:51:35.123090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.154 [2024-12-10 22:51:35.123120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.154 [2024-12-10 22:51:35.123128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.154 [2024-12-10 22:51:35.123135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.154 [2024-12-10 22:51:35.123140] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:28.154 [2024-12-10 22:51:35.123558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 [2024-12-10 22:51:35.272646] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 Malloc0 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:28.154 [2024-12-10 22:51:35.313050] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2409231 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2409232 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2409233 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:28.154 22:51:35 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2409231 00:20:28.154 [2024-12-10 22:51:35.411784] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
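The RPC calls above build the control_msg_list fixture: a TCP transport capped at a single control message, one subsystem backed by a 32 MiB malloc bdev, and a listener on 10.0.0.2:4420; three queue-depth-1 spdk_nvme_perf initiators (on cores 1, 2 and 3) then contend for that one control message. A hedged dry-run of the equivalent rpc.py sequence — the "scripts/rpc.py" client path is an assumption, while the NQN, sizes and flags are taken from the log (the test itself drives these through its rpc_cmd wrapper):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the RPC sequence this test performs; commands are
# collected rather than sent, since they need a running nvmf_tgt.
RPC="scripts/rpc.py"                 # assumed client path
NQN="nqn.2024-07.io.spdk:cnode0"
rpc_calls=()
plan() { rpc_calls+=("$*"); }

plan "$RPC" nvmf_create_transport -t tcp -o \
     --in-capsule-data-size 768 --control-msg-num 1
plan "$RPC" nvmf_create_subsystem "$NQN" -a
plan "$RPC" bdev_malloc_create -b Malloc0 32 512   # 32 MiB bdev, 512 B blocks
plan "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
plan "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

printf '%s\n' "${rpc_calls[@]}"
```

With --control-msg-num 1, most connecting initiators must wait for the shared control message, which is why two of the three perf runs below report average latencies in the tens of milliseconds while one stays near 150 us.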
00:20:28.154 [2024-12-10 22:51:35.411977] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:28.154 [2024-12-10 22:51:35.412133] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:29.088 Initializing NVMe Controllers 00:20:29.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:29.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:29.088 Initialization complete. Launching workers. 00:20:29.088 ======================================================== 00:20:29.088 Latency(us) 00:20:29.088 Device Information : IOPS MiB/s Average min max 00:20:29.088 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 84.00 0.33 12372.04 236.34 41877.59 00:20:29.088 ======================================================== 00:20:29.088 Total : 84.00 0.33 12372.04 236.34 41877.59 00:20:29.088 00:20:29.088 Initializing NVMe Controllers 00:20:29.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:29.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:29.088 Initialization complete. Launching workers. 
00:20:29.088 ======================================================== 00:20:29.088 Latency(us) 00:20:29.088 Device Information : IOPS MiB/s Average min max 00:20:29.088 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6694.00 26.15 149.04 119.95 315.28 00:20:29.088 ======================================================== 00:20:29.088 Total : 6694.00 26.15 149.04 119.95 315.28 00:20:29.088 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2409232 00:20:29.088 Initializing NVMe Controllers 00:20:29.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:29.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:29.088 Initialization complete. Launching workers. 00:20:29.088 ======================================================== 00:20:29.088 Latency(us) 00:20:29.088 Device Information : IOPS MiB/s Average min max 00:20:29.088 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40882.29 40506.78 40939.16 00:20:29.088 ======================================================== 00:20:29.088 Total : 25.00 0.10 40882.29 40506.78 40939.16 00:20:29.088 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2409233 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:29.088 22:51:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:29.088 rmmod nvme_tcp 00:20:29.088 rmmod nvme_fabrics 00:20:29.088 rmmod nvme_keyring 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 2409203 ']' 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2409203 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2409203 ']' 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2409203 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409203 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 2409203' 00:20:29.088 killing process with pid 2409203 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2409203 00:20:29.088 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2409203 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.348 22:51:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:31.254 00:20:31.254 real 0m10.162s 00:20:31.254 user 0m6.771s 
00:20:31.254 sys 0m5.461s 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:31.254 ************************************ 00:20:31.254 END TEST nvmf_control_msg_list 00:20:31.254 ************************************ 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:31.254 22:51:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:31.514 ************************************ 00:20:31.514 START TEST nvmf_wait_for_buf 00:20:31.514 ************************************ 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:31.514 * Looking for test storage... 
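The latency tables printed by spdk_nvme_perf in the test above put IOPS, MiB/s, average, min and max in the last five whitespace-separated fields of each device row. A small awk sketch for pulling the IOPS column out of such a row; the sample row is copied verbatim from this log:

```shell
#!/usr/bin/env bash
# Extract the IOPS column (5th field from the end) of an spdk_nvme_perf
# device row; the sample row is taken from the log above.
row='TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 6694.00 26.15 149.04 119.95 315.28'
iops=$(printf '%s\n' "$row" | awk '{print $(NF-4)}')
echo "$iops"    # prints 6694.00
```

Counting from the end is more robust than fixed field positions here, since the subsystem NQN in the row can contain spaces-free tokens of varying width.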
00:20:31.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:31.514 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:20:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.515 --rc genhtml_branch_coverage=1 00:20:31.515 --rc genhtml_function_coverage=1 00:20:31.515 --rc genhtml_legend=1 00:20:31.515 --rc geninfo_all_blocks=1 00:20:31.515 --rc geninfo_unexecuted_blocks=1 00:20:31.515 00:20:31.515 ' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.515 --rc genhtml_branch_coverage=1 00:20:31.515 --rc genhtml_function_coverage=1 00:20:31.515 --rc genhtml_legend=1 00:20:31.515 --rc geninfo_all_blocks=1 00:20:31.515 --rc geninfo_unexecuted_blocks=1 00:20:31.515 00:20:31.515 ' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.515 --rc genhtml_branch_coverage=1 00:20:31.515 --rc genhtml_function_coverage=1 00:20:31.515 --rc genhtml_legend=1 00:20:31.515 --rc geninfo_all_blocks=1 00:20:31.515 --rc geninfo_unexecuted_blocks=1 00:20:31.515 00:20:31.515 ' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:31.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:31.515 --rc genhtml_branch_coverage=1 00:20:31.515 --rc genhtml_function_coverage=1 00:20:31.515 --rc genhtml_legend=1 00:20:31.515 --rc geninfo_all_blocks=1 00:20:31.515 --rc geninfo_unexecuted_blocks=1 00:20:31.515 00:20:31.515 ' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:31.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:20:31.515 22:51:39 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:38.084 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:38.084 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:38.084 Found net devices under 0000:86:00.0: cvl_0_0 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:38.084 22:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:38.084 Found net devices under 0000:86:00.1: cvl_0_1 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:38.084 22:51:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.084 22:51:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.084 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.085 22:51:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:38.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:20:38.085 00:20:38.085 --- 10.0.0.2 ping statistics --- 00:20:38.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.085 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:20:38.085 00:20:38.085 --- 10.0.0.1 ping statistics --- 00:20:38.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.085 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2412986 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2412986 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2412986 ']' 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 [2024-12-10 22:51:45.306301] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:20:38.085 [2024-12-10 22:51:45.306343] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:38.085 [2024-12-10 22:51:45.386226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:38.085 [2024-12-10 22:51:45.425825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:38.085 [2024-12-10 22:51:45.425862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:38.085 [2024-12-10 22:51:45.425869] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.085 [2024-12-10 22:51:45.425875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.085 [2024-12-10 22:51:45.425880] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.085 [2024-12-10 22:51:45.426394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 
22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 Malloc0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:20:38.085 [2024-12-10 22:51:45.600242] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:38.085 [2024-12-10 22:51:45.628442] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:20:38.085 22:51:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:38.085 [2024-12-10 22:51:45.716231] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:39.461 Initializing NVMe Controllers 00:20:39.461 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:39.461 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:39.461 Initialization complete. Launching workers. 00:20:39.461 ======================================================== 00:20:39.461 Latency(us) 00:20:39.461 Device Information : IOPS MiB/s Average min max 00:20:39.461 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33635.30 7260.73 71836.31 00:20:39.461 ======================================================== 00:20:39.461 Total : 124.00 15.50 33635.30 7260.73 71836.31 00:20:39.461 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.461 22:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:39.461 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:20:39.462 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:39.462 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:20:39.462 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:39.462 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:39.462 rmmod nvme_tcp 00:20:39.721 rmmod nvme_fabrics 00:20:39.721 rmmod nvme_keyring 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2412986 ']' 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2412986 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2412986 ']' 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2412986 
00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2412986 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2412986' 00:20:39.721 killing process with pid 2412986 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2412986 00:20:39.721 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2412986 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:39.980 22:51:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:39.980 22:51:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:20:41.885 00:20:41.885 real 0m10.510s 00:20:41.885 user 0m3.968s 00:20:41.885 sys 0m4.890s 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:41.885 ************************************ 00:20:41.885 END TEST nvmf_wait_for_buf 00:20:41.885 ************************************ 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:20:41.885 22:51:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:48.456 
22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:48.456 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:48.456 22:51:55 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:48.456 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:48.456 Found net devices under 0000:86:00.0: cvl_0_0 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra 
-- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:48.456 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:48.457 Found net devices under 0000:86:00.1: cvl_0_1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:48.457 ************************************ 00:20:48.457 START TEST nvmf_perf_adq 00:20:48.457 ************************************ 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:48.457 * Looking for test storage... 00:20:48.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
scripts/common.sh@344 -- # case "$op" in 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.457 --rc genhtml_branch_coverage=1 00:20:48.457 --rc genhtml_function_coverage=1 00:20:48.457 --rc genhtml_legend=1 00:20:48.457 --rc geninfo_all_blocks=1 00:20:48.457 --rc geninfo_unexecuted_blocks=1 00:20:48.457 00:20:48.457 ' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.457 --rc genhtml_branch_coverage=1 00:20:48.457 --rc genhtml_function_coverage=1 00:20:48.457 --rc genhtml_legend=1 00:20:48.457 --rc geninfo_all_blocks=1 00:20:48.457 --rc geninfo_unexecuted_blocks=1 00:20:48.457 00:20:48.457 ' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.457 --rc genhtml_branch_coverage=1 00:20:48.457 --rc genhtml_function_coverage=1 00:20:48.457 --rc genhtml_legend=1 00:20:48.457 --rc geninfo_all_blocks=1 00:20:48.457 --rc geninfo_unexecuted_blocks=1 00:20:48.457 00:20:48.457 ' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:48.457 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:48.457 --rc genhtml_branch_coverage=1 00:20:48.457 --rc genhtml_function_coverage=1 00:20:48.457 --rc genhtml_legend=1 00:20:48.457 --rc geninfo_all_blocks=1 00:20:48.457 --rc geninfo_unexecuted_blocks=1 00:20:48.457 00:20:48.457 ' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:20:48.457 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:48.458 22:51:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:48.458 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:20:48.458 22:51:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:53.787 22:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.787 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:53.788 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:53.788 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:53.788 Found net devices under 0000:86:00.0: cvl_0_0 00:20:53.788 22:52:01 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:53.788 Found net devices under 0000:86:00.1: cvl_0_1 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 
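The discovery loop above globs each NIC's sysfs `net/` directory and then strips the leading path to leave bare interface names (here `cvl_0_0` and `cvl_0_1`) before assembling `TCP_INTERFACE_LIST`. A minimal sketch of that prefix-stripping step, using a hypothetical path in place of a live sysfs entry:

```shell
# Simulate the sysfs glob result the loop collects for one PCI function.
# The path below is a stand-in; the real entries come from
# /sys/bus/pci/devices/$pci/net/* on the test host.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0")

# Same expansion common.sh applies: drop everything up to the last '/'
# in every array element, leaving just the interface name.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "${pci_net_devs[0]}"   # -> cvl_0_0
```

The `##*/` expansion is applied across the whole array in one step, which is why the trace shows a single `pci_net_devs=(...)` line per device rather than a loop.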
00:20:53.788 22:52:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:54.723 22:52:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:57.259 22:52:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:02.531 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:21:02.531 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:02.531 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.531 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:02.531 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@315 -- # pci_devs=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:02.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:02.532 22:52:09 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:02.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:02.532 Found net devices under 0000:86:00.0: cvl_0_0 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:02.532 Found net devices under 0000:86:00.1: cvl_0_1 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:02.532 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:02.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
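The `ipts` call above expands (at common.sh@790) into a plain `iptables` invocation with an `SPDK_NVMF:`-prefixed comment appended, so that teardown can later remove every rule the test added via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A dry-run sketch of that tagging pattern, with `echo` standing in for `iptables` so no root access is needed:

```shell
# Hedged sketch of the ipts wrapper pattern seen in the trace: tag each
# inserted rule with a recognizable comment so a grep-based restore can
# drop them all at cleanup. 'echo' stands in for the real iptables binary.
ipts() { echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }

rule=$(ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT)
echo "$rule"
```

Tagging rules at insert time keeps the teardown stateless: nothing has to remember which rules were added, only which tag to filter out.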
00:21:02.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:21:02.532 00:21:02.532 --- 10.0.0.2 ping statistics --- 00:21:02.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.532 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.533 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:02.533 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:21:02.533 00:21:02.533 --- 10.0.0.1 ping statistics --- 00:21:02.533 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.533 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2421404 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2421404 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2421404 ']' 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.533 22:52:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.533 [2024-12-10 22:52:10.017349] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:21:02.533 [2024-12-10 22:52:10.017398] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.533 [2024-12-10 22:52:10.099213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.533 [2024-12-10 22:52:10.143326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.533 [2024-12-10 22:52:10.143364] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.533 [2024-12-10 22:52:10.143372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.533 [2024-12-10 22:52:10.143379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.533 [2024-12-10 22:52:10.143385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.533 [2024-12-10 22:52:10.144848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.533 [2024-12-10 22:52:10.144937] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.533 [2024-12-10 22:52:10.144935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.533 [2024-12-10 22:52:10.144899] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.533 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:02.792 22:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 [2024-12-10 22:52:10.367971] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 Malloc1 00:21:02.792 22:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.792 [2024-12-10 22:52:10.434916] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2421576 00:21:02.792 22:52:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:21:02.792 22:52:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:21:05.317 "tick_rate": 2300000000, 00:21:05.317 "poll_groups": [ 00:21:05.317 { 00:21:05.317 "name": "nvmf_tgt_poll_group_000", 00:21:05.317 "admin_qpairs": 1, 00:21:05.317 "io_qpairs": 1, 00:21:05.317 "current_admin_qpairs": 1, 00:21:05.317 "current_io_qpairs": 1, 00:21:05.317 "pending_bdev_io": 0, 00:21:05.317 "completed_nvme_io": 19759, 00:21:05.317 "transports": [ 00:21:05.317 { 00:21:05.317 "trtype": "TCP" 00:21:05.317 } 00:21:05.317 ] 00:21:05.317 }, 00:21:05.317 { 00:21:05.317 "name": "nvmf_tgt_poll_group_001", 00:21:05.317 "admin_qpairs": 0, 00:21:05.317 "io_qpairs": 1, 00:21:05.317 "current_admin_qpairs": 0, 00:21:05.317 "current_io_qpairs": 1, 00:21:05.317 "pending_bdev_io": 0, 00:21:05.317 "completed_nvme_io": 20072, 00:21:05.317 "transports": [ 00:21:05.317 { 00:21:05.317 "trtype": "TCP" 00:21:05.317 } 00:21:05.317 ] 00:21:05.317 }, 00:21:05.317 { 00:21:05.317 "name": "nvmf_tgt_poll_group_002", 00:21:05.317 "admin_qpairs": 0, 00:21:05.317 "io_qpairs": 1, 00:21:05.317 "current_admin_qpairs": 0, 00:21:05.317 "current_io_qpairs": 1, 00:21:05.317 "pending_bdev_io": 0, 00:21:05.317 "completed_nvme_io": 19749, 00:21:05.317 
"transports": [ 00:21:05.317 { 00:21:05.317 "trtype": "TCP" 00:21:05.317 } 00:21:05.317 ] 00:21:05.317 }, 00:21:05.317 { 00:21:05.317 "name": "nvmf_tgt_poll_group_003", 00:21:05.317 "admin_qpairs": 0, 00:21:05.317 "io_qpairs": 1, 00:21:05.317 "current_admin_qpairs": 0, 00:21:05.317 "current_io_qpairs": 1, 00:21:05.317 "pending_bdev_io": 0, 00:21:05.317 "completed_nvme_io": 19309, 00:21:05.317 "transports": [ 00:21:05.317 { 00:21:05.317 "trtype": "TCP" 00:21:05.317 } 00:21:05.317 ] 00:21:05.317 } 00:21:05.317 ] 00:21:05.317 }' 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:21:05.317 22:52:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2421576 00:21:13.422 Initializing NVMe Controllers 00:21:13.422 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:13.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:13.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:13.422 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:13.422 Initialization complete. Launching workers. 
00:21:13.422 ======================================================== 00:21:13.422 Latency(us) 00:21:13.422 Device Information : IOPS MiB/s Average min max 00:21:13.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10280.37 40.16 6226.72 2376.15 10238.78 00:21:13.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10447.66 40.81 6127.19 2343.15 10338.19 00:21:13.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10296.37 40.22 6228.66 2338.26 44243.15 00:21:13.422 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10163.98 39.70 6296.60 2426.29 11499.48 00:21:13.422 ======================================================== 00:21:13.422 Total : 41188.36 160.89 6219.20 2338.26 44243.15 00:21:13.422 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:13.422 rmmod nvme_tcp 00:21:13.422 rmmod nvme_fabrics 00:21:13.422 rmmod nvme_keyring 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:13.422 22:52:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2421404 ']' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2421404 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2421404 ']' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2421404 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2421404 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2421404' 00:21:13.422 killing process with pid 2421404 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2421404 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2421404 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:13.422 
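The `iptr` helper traced here (nvmf/common.sh@791) tears down only the firewall rules the harness added: it pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, so rules carrying an SPDK_NVMF comment are dropped and everything else survives. A minimal sketch of that filter, run against a hypothetical saved ruleset instead of the live firewall:

```shell
# Sketch of the iptr cleanup: iptables-save | grep -v SPDK_NVMF |
# iptables-restore removes only the rules the test suite tagged with an
# SPDK_NVMF comment. The ruleset below is a made-up stand-in so the
# filtering step can be shown without touching real iptables state.
saved='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:rule"
-A INPUT -p icmp -j ACCEPT'

kept=$(printf '%s\n' "$saved" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"
```

Only the port-4420 rule the test installed is removed; the unrelated loopback and ICMP rules pass through unchanged.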
22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:13.422 22:52:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.328 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:15.328 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:21:15.328 22:52:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:21:15.328 22:52:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:21:16.704 22:52:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:21:19.384 22:52:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:24.665 22:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.665 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.665 
Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.665 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.665 22:52:31 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.665 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 
up 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:24.665 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.665 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.404 ms 00:21:24.665 00:21:24.665 --- 10.0.0.2 ping statistics --- 00:21:24.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.665 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.665 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.665 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:21:24.665 00:21:24.665 --- 10.0.0.1 ping statistics --- 00:21:24.665 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.665 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:24.665 net.core.busy_poll = 1 00:21:24.665 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:24.665 net.core.busy_read = 1 00:21:24.666 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:24.666 22:52:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:24.666 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:24.666 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:24.666 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2425368 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2425368 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2425368 ']' 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.925 22:52:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:24.925 [2024-12-10 22:52:32.456742] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:21:24.925 [2024-12-10 22:52:32.456790] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.925 [2024-12-10 22:52:32.536578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.925 [2024-12-10 22:52:32.579266] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.925 [2024-12-10 22:52:32.579303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.925 [2024-12-10 22:52:32.579311] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.925 [2024-12-10 22:52:32.579317] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:24.925 [2024-12-10 22:52:32.579322] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.925 [2024-12-10 22:52:32.584181] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.925 [2024-12-10 22:52:32.584208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.925 [2024-12-10 22:52:32.584321] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.925 [2024-12-10 22:52:32.584323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
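The target is launched with `-m 0xF`, and the reactor notices above confirm one reactor per set bit (cores 0 through 3). A minimal sketch, using nothing beyond shell arithmetic, of how such a hex coremask expands into the core list it selects:

```shell
# Expand an SPDK-style hex coremask into the cores it selects; -m 0xF
# sets bits 0-3, matching the four reactors reported in the log.
mask=0xF
cores=""
for i in 0 1 2 3 4 5 6 7; do
  if [ $(( (mask >> i) & 1 )) -eq 1 ]; then
    cores="$cores $i"
  fi
done
echo "cores:$cores"   # prints: cores: 0 1 2 3
```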
00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 [2024-12-10 22:52:33.489003] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 Malloc1 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:25.861 [2024-12-10 22:52:33.546962] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2425617 
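Once the subsystem is listening, the script runs spdk_nvme_perf and re-reads `nvmf_get_stats`, validating ADQ steering by counting poll groups per their `current_io_qpairs` value with jq (`select(.current_io_qpairs == 1) | length` piped to `wc -l` in the earlier check, `== 0` in the later one). A rough stand-in for that count, assuming a hypothetical stats excerpt and exploiting the fact that the RPC output prints one field per line:

```shell
# Count poll groups with exactly one active I/O qpair, as the
# perf_adq.sh@86 check does with jq + wc -l. The stats excerpt below is
# a hypothetical four-group sample, not real nvmf_get_stats output.
stats='"current_io_qpairs": 1,
"current_io_qpairs": 1,
"current_io_qpairs": 1,
"current_io_qpairs": 1,'
count=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 1')
echo "$count"   # prints: 4
```

In the first run the script requires this count to equal 4 (one qpair per poll group); after busy_poll is enabled, all four I/O qpairs land on nvmf_tgt_poll_group_000, so the second check counts the idle groups instead.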
00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:25.861 22:52:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:28.385 "tick_rate": 2300000000, 00:21:28.385 "poll_groups": [ 00:21:28.385 { 00:21:28.385 "name": "nvmf_tgt_poll_group_000", 00:21:28.385 "admin_qpairs": 1, 00:21:28.385 "io_qpairs": 4, 00:21:28.385 "current_admin_qpairs": 1, 00:21:28.385 "current_io_qpairs": 4, 00:21:28.385 "pending_bdev_io": 0, 00:21:28.385 "completed_nvme_io": 41870, 00:21:28.385 "transports": [ 00:21:28.385 { 00:21:28.385 "trtype": "TCP" 00:21:28.385 } 00:21:28.385 ] 00:21:28.385 }, 00:21:28.385 { 00:21:28.385 "name": "nvmf_tgt_poll_group_001", 00:21:28.385 "admin_qpairs": 0, 00:21:28.385 "io_qpairs": 0, 00:21:28.385 "current_admin_qpairs": 0, 00:21:28.385 "current_io_qpairs": 0, 00:21:28.385 "pending_bdev_io": 0, 00:21:28.385 "completed_nvme_io": 0, 00:21:28.385 "transports": [ 00:21:28.385 { 00:21:28.385 "trtype": "TCP" 00:21:28.385 } 00:21:28.385 ] 00:21:28.385 }, 00:21:28.385 { 00:21:28.385 "name": "nvmf_tgt_poll_group_002", 00:21:28.385 "admin_qpairs": 0, 00:21:28.385 "io_qpairs": 0, 00:21:28.385 "current_admin_qpairs": 0, 00:21:28.385 
"current_io_qpairs": 0, 00:21:28.385 "pending_bdev_io": 0, 00:21:28.385 "completed_nvme_io": 0, 00:21:28.385 "transports": [ 00:21:28.385 { 00:21:28.385 "trtype": "TCP" 00:21:28.385 } 00:21:28.385 ] 00:21:28.385 }, 00:21:28.385 { 00:21:28.385 "name": "nvmf_tgt_poll_group_003", 00:21:28.385 "admin_qpairs": 0, 00:21:28.385 "io_qpairs": 0, 00:21:28.385 "current_admin_qpairs": 0, 00:21:28.385 "current_io_qpairs": 0, 00:21:28.385 "pending_bdev_io": 0, 00:21:28.385 "completed_nvme_io": 0, 00:21:28.385 "transports": [ 00:21:28.385 { 00:21:28.385 "trtype": "TCP" 00:21:28.385 } 00:21:28.385 ] 00:21:28.385 } 00:21:28.385 ] 00:21:28.385 }' 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:21:28.385 22:52:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2425617 00:21:36.487 Initializing NVMe Controllers 00:21:36.487 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:36.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:36.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:36.487 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:36.487 Initialization complete. Launching workers. 
00:21:36.487 ======================================================== 00:21:36.487 Latency(us) 00:21:36.487 Device Information : IOPS MiB/s Average min max 00:21:36.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5407.60 21.12 11872.84 1504.37 56997.73 00:21:36.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5618.50 21.95 11392.17 982.69 55762.66 00:21:36.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6113.10 23.88 10470.73 1023.37 55646.82 00:21:36.487 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5617.10 21.94 11416.60 1227.73 57040.47 00:21:36.487 ======================================================== 00:21:36.487 Total : 22756.30 88.89 11264.89 982.69 57040.47 00:21:36.487 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:36.487 rmmod nvme_tcp 00:21:36.487 rmmod nvme_fabrics 00:21:36.487 rmmod nvme_keyring 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:36.487 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:21:36.488 22:52:43 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2425368 ']' 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2425368 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2425368 ']' 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2425368 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2425368 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2425368' 00:21:36.488 killing process with pid 2425368 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2425368 00:21:36.488 22:52:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2425368 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:21:36.488 
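The teardown above walks autotest_common.sh's `killprocess` helper against the nvmf target pid 2425368: confirm the pid is still alive with `kill -0`, read its comm name via `ps --no-headers -o comm=` to make sure it is not a `sudo` wrapper, then kill and reap it. A condensed sketch of that sequence; the function name and the `sleep` stand-in are illustrative, not from the script:

```shell
# Condensed sketch of the killprocess flow traced above; the real helper
# prints the same "killing process with pid ..." message before the kill.
killprocess_sketch() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1    # pid must still be running
  local name
  name=$(ps --no-headers -o comm= "$pid")   # guard: never kill a sudo wrapper
  [ "$name" = sudo ] && return 1
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true           # reap the child; ignore TERM status
}
sleep 30 &                                  # stand-in for the reactor_0 target
bgpid=$!
killprocess_sketch "$bgpid"
```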
22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.488 22:52:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:39.777 00:21:39.777 real 0m51.890s 00:21:39.777 user 2m47.504s 00:21:39.777 sys 0m9.971s 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:39.777 ************************************ 00:21:39.777 END TEST nvmf_perf_adq 00:21:39.777 ************************************ 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:21:39.777 ************************************ 00:21:39.777 START TEST nvmf_shutdown 00:21:39.777 ************************************ 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:39.777 * Looking for test storage... 00:21:39.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:39.777 22:52:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.777 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:39.777 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.777 --rc genhtml_branch_coverage=1 00:21:39.777 --rc genhtml_function_coverage=1 00:21:39.777 --rc genhtml_legend=1 00:21:39.777 --rc geninfo_all_blocks=1 00:21:39.777 --rc geninfo_unexecuted_blocks=1 00:21:39.777 00:21:39.777 ' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:39.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:39.778 --rc genhtml_branch_coverage=1 00:21:39.778 --rc genhtml_function_coverage=1 00:21:39.778 --rc genhtml_legend=1 00:21:39.778 --rc geninfo_all_blocks=1 00:21:39.778 --rc geninfo_unexecuted_blocks=1 00:21:39.778 00:21:39.778 ' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
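The long trace above is scripts/common.sh's version comparison at work: `lt 1.15 2` splits both versions on `.`, `-`, and `:` into arrays, then walks the components left to right, deciding at the first unequal pair, with missing components treated as 0. A numeric-only sketch of that walk (the real `cmp_versions` also validates each component against `^[0-9]+$` and supports the other comparison operators):

```shell
# Numeric-only sketch of the lt/cmp_versions component walk traced above.
lt() {
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"
  IFS='.-:' read -ra ver2 <<< "$2"
  local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  local v
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1                                   # equal versions are not '<'
}
lt 1.15 2 && echo "1.15 < 2"                 # the lcov gate from the trace above
lt 2.39.2 2.40 && echo "git 2.39.2 < 2.40"
```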
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:39.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:39.778 ************************************ 00:21:39.778 START TEST nvmf_shutdown_tc1 00:21:39.778 ************************************ 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:21:39.778 22:52:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:21:46.344 22:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:46.344 22:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:46.344 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.344 22:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:46.344 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:46.344 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:46.345 Found net devices under 0000:86:00.0: cvl_0_0 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:46.345 Found net devices under 0000:86:00.1: cvl_0_1 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:46.345 22:52:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:46.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:21:46.345 00:21:46.345 --- 10.0.0.2 ping statistics --- 00:21:46.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.345 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:46.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:21:46.345 00:21:46.345 --- 10.0.0.1 ping statistics --- 00:21:46.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.345 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2431062 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2431062 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2431062 ']' 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:46.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:46.345 22:52:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.345 [2024-12-10 22:52:53.448653] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:21:46.345 [2024-12-10 22:52:53.448707] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.345 [2024-12-10 22:52:53.530658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:46.345 [2024-12-10 22:52:53.573813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.345 [2024-12-10 22:52:53.573853] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.345 [2024-12-10 22:52:53.573861] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:46.345 [2024-12-10 22:52:53.573866] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:46.345 [2024-12-10 22:52:53.573871] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:46.345 [2024-12-10 22:52:53.575357] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.345 [2024-12-10 22:52:53.575394] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:46.345 [2024-12-10 22:52:53.575414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:46.345 [2024-12-10 22:52:53.575416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:46.603 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:46.603 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:46.603 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:46.603 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:46.603 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.862 [2024-12-10 22:52:54.343660] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.862 22:52:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # 
cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.862 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:46.862 Malloc1 00:21:46.862 [2024-12-10 22:52:54.453202] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:46.862 Malloc2 00:21:46.862 Malloc3 00:21:46.862 Malloc4 00:21:47.121 Malloc5 00:21:47.121 Malloc6 00:21:47.121 Malloc7 00:21:47.121 Malloc8 00:21:47.121 Malloc9 
00:21:47.121 Malloc10 00:21:47.121 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.121 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:47.121 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:47.121 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2431339 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2431339 /var/tmp/bdevperf.sock 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2431339 ']' 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:47.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.381 { 00:21:47.381 "params": { 00:21:47.381 "name": "Nvme$subsystem", 00:21:47.381 "trtype": "$TEST_TRANSPORT", 00:21:47.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.381 "adrfam": "ipv4", 00:21:47.381 "trsvcid": "$NVMF_PORT", 00:21:47.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.381 "hdgst": ${hdgst:-false}, 00:21:47.381 "ddgst": ${ddgst:-false} 00:21:47.381 }, 00:21:47.381 "method": "bdev_nvme_attach_controller" 00:21:47.381 } 00:21:47.381 EOF 00:21:47.381 )") 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.381 { 00:21:47.381 "params": { 00:21:47.381 "name": "Nvme$subsystem", 00:21:47.381 "trtype": "$TEST_TRANSPORT", 00:21:47.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.381 "adrfam": "ipv4", 00:21:47.381 "trsvcid": "$NVMF_PORT", 00:21:47.381 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.381 "hdgst": ${hdgst:-false}, 00:21:47.381 "ddgst": ${ddgst:-false} 00:21:47.381 }, 00:21:47.381 "method": "bdev_nvme_attach_controller" 00:21:47.381 } 00:21:47.381 EOF 00:21:47.381 )") 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.381 { 00:21:47.381 "params": { 00:21:47.381 "name": "Nvme$subsystem", 00:21:47.381 "trtype": "$TEST_TRANSPORT", 00:21:47.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.381 "adrfam": "ipv4", 00:21:47.381 "trsvcid": "$NVMF_PORT", 00:21:47.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.381 "hdgst": ${hdgst:-false}, 00:21:47.381 "ddgst": ${ddgst:-false} 00:21:47.381 }, 00:21:47.381 "method": "bdev_nvme_attach_controller" 00:21:47.381 } 00:21:47.381 EOF 00:21:47.381 )") 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.381 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.381 { 00:21:47.381 "params": { 00:21:47.381 "name": "Nvme$subsystem", 00:21:47.381 "trtype": "$TEST_TRANSPORT", 00:21:47.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.381 "adrfam": "ipv4", 00:21:47.381 "trsvcid": "$NVMF_PORT", 00:21:47.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.381 "hdgst": 
${hdgst:-false}, 00:21:47.381 "ddgst": ${ddgst:-false} 00:21:47.381 }, 00:21:47.381 "method": "bdev_nvme_attach_controller" 00:21:47.381 } 00:21:47.381 EOF 00:21:47.381 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.382 { 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme$subsystem", 00:21:47.382 "trtype": "$TEST_TRANSPORT", 00:21:47.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "$NVMF_PORT", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.382 "hdgst": ${hdgst:-false}, 00:21:47.382 "ddgst": ${ddgst:-false} 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 } 00:21:47.382 EOF 00:21:47.382 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.382 { 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme$subsystem", 00:21:47.382 "trtype": "$TEST_TRANSPORT", 00:21:47.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "$NVMF_PORT", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.382 "hdgst": ${hdgst:-false}, 00:21:47.382 "ddgst": ${ddgst:-false} 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 
00:21:47.382 } 00:21:47.382 EOF 00:21:47.382 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.382 { 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme$subsystem", 00:21:47.382 "trtype": "$TEST_TRANSPORT", 00:21:47.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "$NVMF_PORT", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.382 "hdgst": ${hdgst:-false}, 00:21:47.382 "ddgst": ${ddgst:-false} 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 } 00:21:47.382 EOF 00:21:47.382 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 [2024-12-10 22:52:54.925731] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:21:47.382 [2024-12-10 22:52:54.925779] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.382 { 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme$subsystem", 00:21:47.382 "trtype": "$TEST_TRANSPORT", 00:21:47.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "$NVMF_PORT", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.382 "hdgst": ${hdgst:-false}, 00:21:47.382 "ddgst": ${ddgst:-false} 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 } 00:21:47.382 EOF 00:21:47.382 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.382 { 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme$subsystem", 00:21:47.382 "trtype": "$TEST_TRANSPORT", 00:21:47.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "$NVMF_PORT", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.382 "hdgst": ${hdgst:-false}, 00:21:47.382 "ddgst": ${ddgst:-false} 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 
00:21:47.382 } 00:21:47.382 EOF 00:21:47.382 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:47.382 { 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme$subsystem", 00:21:47.382 "trtype": "$TEST_TRANSPORT", 00:21:47.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "$NVMF_PORT", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:47.382 "hdgst": ${hdgst:-false}, 00:21:47.382 "ddgst": ${ddgst:-false} 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 } 00:21:47.382 EOF 00:21:47.382 )") 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:47.382 22:52:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme1", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme2", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme3", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme4", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 
00:21:47.382 "name": "Nvme5", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme6", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme7", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.382 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:47.382 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:47.382 "hdgst": false, 00:21:47.382 "ddgst": false 00:21:47.382 }, 00:21:47.382 "method": "bdev_nvme_attach_controller" 00:21:47.382 },{ 00:21:47.382 "params": { 00:21:47.382 "name": "Nvme8", 00:21:47.382 "trtype": "tcp", 00:21:47.382 "traddr": "10.0.0.2", 00:21:47.382 "adrfam": "ipv4", 00:21:47.382 "trsvcid": "4420", 00:21:47.383 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:47.383 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:47.383 "hdgst": false, 00:21:47.383 "ddgst": false 00:21:47.383 }, 00:21:47.383 "method": "bdev_nvme_attach_controller" 00:21:47.383 },{ 00:21:47.383 "params": { 00:21:47.383 "name": "Nvme9", 00:21:47.383 "trtype": "tcp", 00:21:47.383 "traddr": "10.0.0.2", 00:21:47.383 "adrfam": "ipv4", 00:21:47.383 "trsvcid": "4420", 00:21:47.383 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:47.383 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:47.383 "hdgst": false, 00:21:47.383 "ddgst": false 00:21:47.383 }, 00:21:47.383 "method": "bdev_nvme_attach_controller" 00:21:47.383 },{ 00:21:47.383 "params": { 00:21:47.383 "name": "Nvme10", 00:21:47.383 "trtype": "tcp", 00:21:47.383 "traddr": "10.0.0.2", 00:21:47.383 "adrfam": "ipv4", 00:21:47.383 "trsvcid": "4420", 00:21:47.383 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:47.383 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:47.383 "hdgst": false, 00:21:47.383 "ddgst": false 00:21:47.383 }, 00:21:47.383 "method": "bdev_nvme_attach_controller" 00:21:47.383 }' 00:21:47.383 [2024-12-10 22:52:55.001304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.383 [2024-12-10 22:52:55.041867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2431339 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:49.289 22:52:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:50.224 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/shutdown.sh: line 74: 2431339 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2431062 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 
22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 [2024-12-10 22:52:57.853534] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:21:50.225 [2024-12-10 22:52:57.853584] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431831 ] 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": 
"bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:50.225 { 00:21:50.225 "params": { 00:21:50.225 "name": "Nvme$subsystem", 00:21:50.225 "trtype": "$TEST_TRANSPORT", 00:21:50.225 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.225 "adrfam": "ipv4", 00:21:50.225 "trsvcid": "$NVMF_PORT", 00:21:50.225 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.225 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.225 "hdgst": ${hdgst:-false}, 00:21:50.225 "ddgst": ${ddgst:-false} 00:21:50.225 }, 00:21:50.225 "method": "bdev_nvme_attach_controller" 00:21:50.225 } 00:21:50.225 EOF 00:21:50.225 )") 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:21:50.225 22:52:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:50.225 "params": { 00:21:50.226 "name": "Nvme1", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme2", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme3", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme4", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 
00:21:50.226 "name": "Nvme5", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme6", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme7", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme8", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme9", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 },{ 00:21:50.226 "params": { 00:21:50.226 "name": "Nvme10", 00:21:50.226 "trtype": "tcp", 00:21:50.226 "traddr": "10.0.0.2", 00:21:50.226 "adrfam": "ipv4", 00:21:50.226 "trsvcid": "4420", 00:21:50.226 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:50.226 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:50.226 "hdgst": false, 00:21:50.226 "ddgst": false 00:21:50.226 }, 00:21:50.226 "method": "bdev_nvme_attach_controller" 00:21:50.226 }' 00:21:50.226 [2024-12-10 22:52:57.932191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.485 [2024-12-10 22:52:57.973380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.860 Running I/O for 1 seconds... 00:21:53.055 2186.00 IOPS, 136.62 MiB/s 00:21:53.055 Latency(us) 00:21:53.055 [2024-12-10T21:53:00.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.055 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme1n1 : 1.16 276.16 17.26 0.00 0.00 226031.88 15386.71 217009.64 00:21:53.055 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme2n1 : 1.05 243.75 15.23 0.00 0.00 255758.69 18919.96 248011.02 00:21:53.055 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme3n1 : 1.12 289.83 18.11 0.00 0.00 211051.97 10143.83 220656.86 00:21:53.055 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme4n1 : 1.13 282.53 17.66 0.00 0.00 214637.88 20629.59 208803.39 00:21:53.055 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme5n1 : 1.18 271.89 16.99 0.00 0.00 220326.33 17780.20 226127.69 00:21:53.055 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme6n1 : 1.18 271.73 16.98 0.00 0.00 216837.08 21883.33 219745.06 00:21:53.055 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme7n1 : 1.16 274.95 17.18 0.00 0.00 211311.93 18122.13 230686.72 00:21:53.055 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme8n1 : 1.17 273.72 17.11 0.00 0.00 209158.32 15044.79 221568.67 00:21:53.055 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme9n1 : 1.18 274.56 17.16 0.00 0.00 205527.65 12195.39 222480.47 00:21:53.055 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.055 Verification LBA range: start 0x0 length 0x400 00:21:53.055 Nvme10n1 : 1.18 270.24 16.89 0.00 0.00 205941.18 13335.15 248011.02 00:21:53.055 [2024-12-10T21:53:00.784Z] =================================================================================================================== 00:21:53.055 [2024-12-10T21:53:00.784Z] Total : 2729.36 170.59 0.00 0.00 216855.18 10143.83 248011.02 00:21:53.055 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:53.055 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:53.055 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:21:53.055 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:53.315 rmmod nvme_tcp 00:21:53.315 rmmod nvme_fabrics 00:21:53.315 rmmod nvme_keyring 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2431062 ']' 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2431062 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2431062 ']' 
00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 2431062 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431062 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431062' 00:21:53.315 killing process with pid 2431062 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2431062 00:21:53.315 22:53:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2431062 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:53.574 22:53:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:56.110 00:21:56.110 real 0m15.874s 00:21:56.110 user 0m36.764s 00:21:56.110 sys 0m5.834s 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:56.110 ************************************ 00:21:56.110 END TEST nvmf_shutdown_tc1 00:21:56.110 ************************************ 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # 
set +x 00:21:56.110 ************************************ 00:21:56.110 START TEST nvmf_shutdown_tc2 00:21:56.110 ************************************ 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:56.110 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # 
xtrace_disable 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # 
local -ga mlx 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:56.111 22:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:56.111 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:56.111 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:56.111 22:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.111 22:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:56.111 Found net devices under 0000:86:00.0: cvl_0_0 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:56.111 Found net devices under 0000:86:00.1: cvl_0_1 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:56.111 22:53:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.111 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:56.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:21:56.112 00:21:56.112 --- 10.0.0.2 ping statistics --- 00:21:56.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.112 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:21:56.112 00:21:56.112 --- 10.0.0.1 ping statistics --- 00:21:56.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.112 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:56.112 22:53:03 
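The `nvmf_tcp_init` trace above (common.sh@250–291) builds a two-interface TCP test topology by moving one physical port into a private network namespace, addressing both sides, opening the NVMe/TCP port, and verifying reachability with `ping`. The same command sequence can be sketched as below; the interface names, namespace name, and addresses mirror the trace, and a `run()` wrapper prints the commands in dry-run mode so the sketch can be inspected without root privileges:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns topology set up in the trace above.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0      # moved into the namespace; carries the target IP
INITIATOR_IF=cvl_0_1   # stays in the root namespace

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # target reachable from root ns
run ip netns exec "$NS" ping -c 1 10.0.0.1   # initiator reachable from ns
```

With both pings succeeding, the helper returns 0 and the test proceeds to load `nvme-tcp` and start the target inside the namespace.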
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2432860 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2432860 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2432860 ']' 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.112 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.112 [2024-12-10 22:53:03.769359] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:21:56.112 [2024-12-10 22:53:03.769402] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.371 [2024-12-10 22:53:03.849989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:56.371 [2024-12-10 22:53:03.891483] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.371 [2024-12-10 22:53:03.891519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.371 [2024-12-10 22:53:03.891527] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.371 [2024-12-10 22:53:03.891533] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.371 [2024-12-10 22:53:03.891538] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:56.371 [2024-12-10 22:53:03.893194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.371 [2024-12-10 22:53:03.893349] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:21:56.371 [2024-12-10 22:53:03.893455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.371 [2024-12-10 22:53:03.893456] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:21:56.371 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:56.371 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:56.371 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:56.371 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.371 22:53:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.371 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.372 [2024-12-10 22:53:04.030615] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.372 22:53:04 
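The `waitforlisten 2432860` call traced above blocks until the freshly started `nvmf_tgt` process is alive and listening on `/var/tmp/spdk.sock`. A simplified sketch of that pattern follows; the real helper probes the RPC unix socket itself, while this version reduces the probe to a path-existence test and makes `max_retries` a parameter so both the success and the "process died" paths can be exercised without SPDK:

```shell
# Simplified sketch of the waitforlisten pattern seen in the trace.
# Probe logic is an assumption: existence check instead of a real socket test.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i=0
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while ((i++ < max_retries)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process is gone
        [ -e "$rpc_addr" ] && return 0           # socket path showed up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Once this returns 0, the test registers its cleanup trap and starts issuing `rpc_cmd` calls (such as the `nvmf_create_transport -t tcp -o -u 8192` seen above) against the now-listening socket.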
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # 
cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.372 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.631 Malloc1 00:21:56.631 [2024-12-10 22:53:04.138960] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.631 Malloc2 00:21:56.631 Malloc3 00:21:56.631 Malloc4 00:21:56.631 Malloc5 00:21:56.631 Malloc6 00:21:56.891 Malloc7 00:21:56.891 Malloc8 00:21:56.891 Malloc9 
00:21:56.891 Malloc10 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2433128 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2433128 /var/tmp/bdevperf.sock 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2433128 ']' 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:56.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 
"adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": 
${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 [2024-12-10 22:53:04.607302] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:21:56.891 [2024-12-10 22:53:04.607352] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2433128 ] 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:56.891 { 00:21:56.891 "params": { 00:21:56.891 "name": "Nvme$subsystem", 00:21:56.891 "trtype": "$TEST_TRANSPORT", 00:21:56.891 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:56.891 "adrfam": "ipv4", 00:21:56.891 "trsvcid": "$NVMF_PORT", 00:21:56.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:56.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:56.891 "hdgst": ${hdgst:-false}, 00:21:56.891 "ddgst": ${ddgst:-false} 00:21:56.891 }, 00:21:56.891 "method": "bdev_nvme_attach_controller" 00:21:56.891 } 00:21:56.891 EOF 00:21:56.891 )") 00:21:56.891 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:57.151 { 00:21:57.151 "params": { 00:21:57.151 "name": "Nvme$subsystem", 00:21:57.151 "trtype": "$TEST_TRANSPORT", 00:21:57.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.151 "adrfam": "ipv4", 00:21:57.151 "trsvcid": "$NVMF_PORT", 00:21:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.151 "hdgst": 
${hdgst:-false}, 00:21:57.151 "ddgst": ${ddgst:-false} 00:21:57.151 }, 00:21:57.151 "method": "bdev_nvme_attach_controller" 00:21:57.151 } 00:21:57.151 EOF 00:21:57.151 )") 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:57.151 { 00:21:57.151 "params": { 00:21:57.151 "name": "Nvme$subsystem", 00:21:57.151 "trtype": "$TEST_TRANSPORT", 00:21:57.151 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:57.151 "adrfam": "ipv4", 00:21:57.151 "trsvcid": "$NVMF_PORT", 00:21:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:57.151 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:57.151 "hdgst": ${hdgst:-false}, 00:21:57.151 "ddgst": ${ddgst:-false} 00:21:57.151 }, 00:21:57.151 "method": "bdev_nvme_attach_controller" 00:21:57.151 } 00:21:57.151 EOF 00:21:57.151 )") 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:21:57.151 22:53:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:57.151 "params": { 00:21:57.151 "name": "Nvme1", 00:21:57.151 "trtype": "tcp", 00:21:57.151 "traddr": "10.0.0.2", 00:21:57.151 "adrfam": "ipv4", 00:21:57.151 "trsvcid": "4420", 00:21:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.151 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.151 "hdgst": false, 00:21:57.151 "ddgst": false 00:21:57.151 }, 00:21:57.151 "method": "bdev_nvme_attach_controller" 00:21:57.151 },{ 00:21:57.151 "params": { 00:21:57.151 "name": "Nvme2", 00:21:57.151 "trtype": "tcp", 00:21:57.151 "traddr": "10.0.0.2", 00:21:57.151 "adrfam": "ipv4", 00:21:57.151 "trsvcid": "4420", 00:21:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.151 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.151 "hdgst": false, 00:21:57.151 "ddgst": false 00:21:57.151 }, 00:21:57.151 "method": "bdev_nvme_attach_controller" 00:21:57.151 },{ 00:21:57.151 "params": { 00:21:57.151 "name": "Nvme3", 00:21:57.151 "trtype": "tcp", 00:21:57.151 "traddr": "10.0.0.2", 00:21:57.151 "adrfam": "ipv4", 00:21:57.151 "trsvcid": "4420", 00:21:57.151 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:57.151 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:57.151 "hdgst": false, 00:21:57.151 "ddgst": false 00:21:57.151 }, 00:21:57.151 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 00:21:57.152 "name": "Nvme4", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 
00:21:57.152 "name": "Nvme5", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 00:21:57.152 "name": "Nvme6", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 00:21:57.152 "name": "Nvme7", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 00:21:57.152 "name": "Nvme8", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 00:21:57.152 "name": "Nvme9", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 },{ 00:21:57.152 "params": { 00:21:57.152 "name": "Nvme10", 00:21:57.152 "trtype": "tcp", 00:21:57.152 "traddr": "10.0.0.2", 00:21:57.152 "adrfam": "ipv4", 00:21:57.152 "trsvcid": "4420", 00:21:57.152 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:57.152 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:57.152 "hdgst": false, 00:21:57.152 "ddgst": false 00:21:57.152 }, 00:21:57.152 "method": "bdev_nvme_attach_controller" 00:21:57.152 }' 00:21:57.152 [2024-12-10 22:53:04.685655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.152 [2024-12-10 22:53:04.726678] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:58.529 Running I/O for 10 seconds... 00:21:58.789 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.789 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:58.789 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:58.789 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.789 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:59.048 22:53:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 2433128 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2433128 ']' 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2433128 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:59.048 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.049 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2433128 00:21:59.049 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:59.049 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:59.049 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2433128' 00:21:59.049 killing process with pid 2433128 00:21:59.049 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2433128 00:21:59.049 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2433128 00:21:59.049 Received shutdown signal, test time was about 0.842812 seconds 00:21:59.049 00:21:59.049 Latency(us) 00:21:59.049 [2024-12-10T21:53:06.778Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.049 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme1n1 : 0.81 236.89 14.81 0.00 0.00 266808.10 17780.20 217009.64 00:21:59.049 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme2n1 : 0.80 238.72 14.92 0.00 0.00 259520.78 20515.62 216097.84 00:21:59.049 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme3n1 : 0.83 308.69 19.29 0.00 0.00 196695.15 14246.96 218833.25 00:21:59.049 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme4n1 : 0.82 235.42 14.71 0.00 0.00 252605.66 16412.49 219745.06 00:21:59.049 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme5n1 : 0.83 306.97 19.19 0.00 0.00 189923.06 18805.98 223392.28 00:21:59.049 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme6n1 : 0.84 306.05 19.13 0.00 0.00 186578.37 15842.62 220656.86 00:21:59.049 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme7n1 : 0.84 304.85 19.05 0.00 0.00 183380.59 16868.40 206979.78 00:21:59.049 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme8n1 : 0.84 303.98 19.00 0.00 0.00 180122.49 14702.86 222480.47 00:21:59.049 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme9n1 : 0.83 232.56 14.53 0.00 0.00 229487.90 18805.98 222480.47 00:21:59.049 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:59.049 Verification LBA range: start 0x0 length 0x400 00:21:59.049 Nvme10n1 : 0.82 233.97 14.62 0.00 0.00 222653.22 17780.20 
235245.75 00:21:59.049 [2024-12-10T21:53:06.778Z] =================================================================================================================== 00:21:59.049 [2024-12-10T21:53:06.778Z] Total : 2708.10 169.26 0.00 0.00 212572.16 14246.96 235245.75 00:21:59.308 22:53:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2432860 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:00.243 rmmod nvme_tcp 00:22:00.243 rmmod nvme_fabrics 00:22:00.243 rmmod nvme_keyring 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2432860 ']' 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 2432860 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2432860 ']' 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2432860 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:22:00.243 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.502 22:53:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2432860 00:22:00.502 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:00.502 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:00.502 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2432860' 00:22:00.502 killing process with pid 2432860 00:22:00.502 22:53:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2432860 00:22:00.502 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2432860 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:22:00.761 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:00.762 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:00.762 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:00.762 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:00.762 22:53:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:03.298 00:22:03.298 real 
0m7.050s 00:22:03.298 user 0m20.109s 00:22:03.298 sys 0m1.287s 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:22:03.298 ************************************ 00:22:03.298 END TEST nvmf_shutdown_tc2 00:22:03.298 ************************************ 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:03.298 ************************************ 00:22:03.298 START TEST nvmf_shutdown_tc3 00:22:03.298 ************************************ 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:03.298 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 
00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:03.299 
22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:03.299 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:03.299 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:03.299 22:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:03.299 Found net devices under 0000:86:00.0: cvl_0_0 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:03.299 
22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:03.299 Found net devices under 0000:86:00.1: cvl_0_1 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.299 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.300 22:53:10 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:03.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:22:03.300 00:22:03.300 --- 10.0.0.2 ping statistics --- 00:22:03.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.300 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:03.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:22:03.300 00:22:03.300 --- 10.0.0.1 ping statistics --- 00:22:03.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.300 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.300 
22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2434187 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2434187 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2434187 ']' 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:03.300 22:53:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:03.300 [2024-12-10 22:53:10.916567] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:22:03.300 [2024-12-10 22:53:10.916621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:03.300 [2024-12-10 22:53:11.000305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.559 [2024-12-10 22:53:11.041212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.559 [2024-12-10 22:53:11.041248] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.559 [2024-12-10 22:53:11.041258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.559 [2024-12-10 22:53:11.041264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.559 [2024-12-10 22:53:11.041269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:03.559 [2024-12-10 22:53:11.042805] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.559 [2024-12-10 22:53:11.042841] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.559 [2024-12-10 22:53:11.042950] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.559 [2024-12-10 22:53:11.042951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.126 [2024-12-10 22:53:11.798114] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.126 22:53:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # 
cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:04.126 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:22:04.385 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:04.385 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.385 22:53:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.385 Malloc1 00:22:04.385 [2024-12-10 22:53:11.913413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:04.385 Malloc2 00:22:04.385 Malloc3 00:22:04.385 Malloc4 00:22:04.385 Malloc5 00:22:04.385 Malloc6 00:22:04.644 Malloc7 00:22:04.644 Malloc8 00:22:04.644 Malloc9 
00:22:04.644 Malloc10 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2434472 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2434472 /var/tmp/bdevperf.sock 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2434472 ']' 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:22:04.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.644 { 00:22:04.644 "params": { 00:22:04.644 "name": "Nvme$subsystem", 00:22:04.644 "trtype": "$TEST_TRANSPORT", 00:22:04.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.644 "adrfam": "ipv4", 00:22:04.644 "trsvcid": "$NVMF_PORT", 00:22:04.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.644 "hdgst": ${hdgst:-false}, 00:22:04.644 "ddgst": ${ddgst:-false} 00:22:04.644 }, 00:22:04.644 "method": "bdev_nvme_attach_controller" 00:22:04.644 } 00:22:04.644 EOF 00:22:04.644 )") 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.644 { 00:22:04.644 "params": { 00:22:04.644 "name": "Nvme$subsystem", 00:22:04.644 "trtype": "$TEST_TRANSPORT", 00:22:04.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.644 
"adrfam": "ipv4", 00:22:04.644 "trsvcid": "$NVMF_PORT", 00:22:04.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.644 "hdgst": ${hdgst:-false}, 00:22:04.644 "ddgst": ${ddgst:-false} 00:22:04.644 }, 00:22:04.644 "method": "bdev_nvme_attach_controller" 00:22:04.644 } 00:22:04.644 EOF 00:22:04.644 )") 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.644 { 00:22:04.644 "params": { 00:22:04.644 "name": "Nvme$subsystem", 00:22:04.644 "trtype": "$TEST_TRANSPORT", 00:22:04.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.644 "adrfam": "ipv4", 00:22:04.644 "trsvcid": "$NVMF_PORT", 00:22:04.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.644 "hdgst": ${hdgst:-false}, 00:22:04.644 "ddgst": ${ddgst:-false} 00:22:04.644 }, 00:22:04.644 "method": "bdev_nvme_attach_controller" 00:22:04.644 } 00:22:04.644 EOF 00:22:04.644 )") 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.644 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.644 { 00:22:04.644 "params": { 00:22:04.644 "name": "Nvme$subsystem", 00:22:04.644 "trtype": "$TEST_TRANSPORT", 00:22:04.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.644 "adrfam": "ipv4", 00:22:04.644 "trsvcid": "$NVMF_PORT", 00:22:04.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:22:04.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.644 "hdgst": ${hdgst:-false}, 00:22:04.644 "ddgst": ${ddgst:-false} 00:22:04.644 }, 00:22:04.644 "method": "bdev_nvme_attach_controller" 00:22:04.644 } 00:22:04.644 EOF 00:22:04.644 )") 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.904 { 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme$subsystem", 00:22:04.904 "trtype": "$TEST_TRANSPORT", 00:22:04.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "$NVMF_PORT", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.904 "hdgst": ${hdgst:-false}, 00:22:04.904 "ddgst": ${ddgst:-false} 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 } 00:22:04.904 EOF 00:22:04.904 )") 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.904 { 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme$subsystem", 00:22:04.904 "trtype": "$TEST_TRANSPORT", 00:22:04.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "$NVMF_PORT", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.904 "hdgst": ${hdgst:-false}, 00:22:04.904 "ddgst": 
${ddgst:-false} 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 } 00:22:04.904 EOF 00:22:04.904 )") 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.904 { 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme$subsystem", 00:22:04.904 "trtype": "$TEST_TRANSPORT", 00:22:04.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "$NVMF_PORT", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.904 "hdgst": ${hdgst:-false}, 00:22:04.904 "ddgst": ${ddgst:-false} 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 } 00:22:04.904 EOF 00:22:04.904 )") 00:22:04.904 [2024-12-10 22:53:12.390020] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:22:04.904 [2024-12-10 22:53:12.390070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2434472 ] 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.904 { 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme$subsystem", 00:22:04.904 "trtype": "$TEST_TRANSPORT", 00:22:04.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "$NVMF_PORT", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.904 "hdgst": ${hdgst:-false}, 00:22:04.904 "ddgst": ${ddgst:-false} 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 } 00:22:04.904 EOF 00:22:04.904 )") 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.904 { 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme$subsystem", 00:22:04.904 "trtype": "$TEST_TRANSPORT", 00:22:04.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "$NVMF_PORT", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.904 "hdgst": 
${hdgst:-false}, 00:22:04.904 "ddgst": ${ddgst:-false} 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 } 00:22:04.904 EOF 00:22:04.904 )") 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:04.904 { 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme$subsystem", 00:22:04.904 "trtype": "$TEST_TRANSPORT", 00:22:04.904 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "$NVMF_PORT", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:04.904 "hdgst": ${hdgst:-false}, 00:22:04.904 "ddgst": ${ddgst:-false} 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 } 00:22:04.904 EOF 00:22:04.904 )") 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
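The `gen_nvmf_target_json` expansion traced above (common.sh@560-586) builds one JSON fragment per subsystem with a heredoc, appends each to a bash `config` array, and finally comma-joins the fragments (via `IFS=,` and `"${config[*]}"`) before normalizing with `jq`. A simplified, self-contained sketch of that accumulation pattern, with an illustrative subsystem count and addresses taken from the trace:

```shell
# Simplified sketch of the config-accumulation pattern from
# nvmf/common.sh: one heredoc-generated JSON object per subsystem,
# collected into an array and comma-joined at the end. Three
# subsystems here are illustrative; the real test uses ten.
config=()
for subsystem in 1 2 3; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
done

# "${config[*]}" joins array elements with the first character of IFS,
# producing the comma-separated controller list.
IFS=,
printf '%s\n' "${config[*]}"
```

The harness pipes the joined result through `jq .` and feeds it to bdevperf as `--json /dev/fd/63`, which is why the fully-expanded `printf '%s\n' '{ ... }'` with concrete Nvme1..Nvme10 parameters appears later in the trace.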
00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:22:04.904 22:53:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme1", 00:22:04.904 "trtype": "tcp", 00:22:04.904 "traddr": "10.0.0.2", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "4420", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:04.904 "hdgst": false, 00:22:04.904 "ddgst": false 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 },{ 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme2", 00:22:04.904 "trtype": "tcp", 00:22:04.904 "traddr": "10.0.0.2", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "4420", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:04.904 "hdgst": false, 00:22:04.904 "ddgst": false 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 },{ 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme3", 00:22:04.904 "trtype": "tcp", 00:22:04.904 "traddr": "10.0.0.2", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "4420", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:04.904 "hdgst": false, 00:22:04.904 "ddgst": false 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 },{ 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme4", 00:22:04.904 "trtype": "tcp", 00:22:04.904 "traddr": "10.0.0.2", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "4420", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:04.904 "hdgst": false, 00:22:04.904 "ddgst": false 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 },{ 00:22:04.904 "params": { 
00:22:04.904 "name": "Nvme5", 00:22:04.904 "trtype": "tcp", 00:22:04.904 "traddr": "10.0.0.2", 00:22:04.904 "adrfam": "ipv4", 00:22:04.904 "trsvcid": "4420", 00:22:04.904 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:04.904 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:04.904 "hdgst": false, 00:22:04.904 "ddgst": false 00:22:04.904 }, 00:22:04.904 "method": "bdev_nvme_attach_controller" 00:22:04.904 },{ 00:22:04.904 "params": { 00:22:04.904 "name": "Nvme6", 00:22:04.905 "trtype": "tcp", 00:22:04.905 "traddr": "10.0.0.2", 00:22:04.905 "adrfam": "ipv4", 00:22:04.905 "trsvcid": "4420", 00:22:04.905 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:04.905 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:04.905 "hdgst": false, 00:22:04.905 "ddgst": false 00:22:04.905 }, 00:22:04.905 "method": "bdev_nvme_attach_controller" 00:22:04.905 },{ 00:22:04.905 "params": { 00:22:04.905 "name": "Nvme7", 00:22:04.905 "trtype": "tcp", 00:22:04.905 "traddr": "10.0.0.2", 00:22:04.905 "adrfam": "ipv4", 00:22:04.905 "trsvcid": "4420", 00:22:04.905 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:04.905 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:04.905 "hdgst": false, 00:22:04.905 "ddgst": false 00:22:04.905 }, 00:22:04.905 "method": "bdev_nvme_attach_controller" 00:22:04.905 },{ 00:22:04.905 "params": { 00:22:04.905 "name": "Nvme8", 00:22:04.905 "trtype": "tcp", 00:22:04.905 "traddr": "10.0.0.2", 00:22:04.905 "adrfam": "ipv4", 00:22:04.905 "trsvcid": "4420", 00:22:04.905 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:04.905 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:04.905 "hdgst": false, 00:22:04.905 "ddgst": false 00:22:04.905 }, 00:22:04.905 "method": "bdev_nvme_attach_controller" 00:22:04.905 },{ 00:22:04.905 "params": { 00:22:04.905 "name": "Nvme9", 00:22:04.905 "trtype": "tcp", 00:22:04.905 "traddr": "10.0.0.2", 00:22:04.905 "adrfam": "ipv4", 00:22:04.905 "trsvcid": "4420", 00:22:04.905 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:04.905 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:22:04.905 "hdgst": false, 00:22:04.905 "ddgst": false 00:22:04.905 }, 00:22:04.905 "method": "bdev_nvme_attach_controller" 00:22:04.905 },{ 00:22:04.905 "params": { 00:22:04.905 "name": "Nvme10", 00:22:04.905 "trtype": "tcp", 00:22:04.905 "traddr": "10.0.0.2", 00:22:04.905 "adrfam": "ipv4", 00:22:04.905 "trsvcid": "4420", 00:22:04.905 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:04.905 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:04.905 "hdgst": false, 00:22:04.905 "ddgst": false 00:22:04.905 }, 00:22:04.905 "method": "bdev_nvme_attach_controller" 00:22:04.905 }' 00:22:04.905 [2024-12-10 22:53:12.467636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:04.905 [2024-12-10 22:53:12.508501] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:06.283 Running I/O for 10 seconds... 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=80 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 80 -ge 100 ']' 00:22:06.851 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2434187 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2434187 ']' 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2434187 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:22:07.125 22:53:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2434187 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2434187' 00:22:07.125 killing process with pid 2434187 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2434187 00:22:07.125 22:53:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2434187
00:22:07.125 [2024-12-10 22:53:14.700151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2451d70 is same with the state(6) to be set
00:22:07.126 [2024-12-10 22:53:14.703524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2452710 is same with the state(6) to be set
00:22:07.127 [2024-12-10 22:53:14.705004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2452c00 is same with the state(6) to be set
00:22:07.128 [2024-12-10 22:53:14.706124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706437]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706514] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24530d0 is same with the state(6) to be set 00:22:07.128 [2024-12-10 22:53:14.706649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.128 [2024-12-10 22:53:14.706680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.128 [2024-12-10 22:53:14.706690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.128 [2024-12-10 22:53:14.706698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.128 [2024-12-10 22:53:14.706706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.128 [2024-12-10 22:53:14.706713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.128 
[2024-12-10 22:53:14.706720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.128 [2024-12-10 22:53:14.706727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.128 [2024-12-10 22:53:14.706733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ae90 is same with the state(6) to be set
00:22:07.128 [2024-12-10 22:53:14.706772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.128 [2024-12-10 22:53:14.706782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.128 [2024-12-10 22:53:14.706789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.128 [2024-12-10 22:53:14.706797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.128 [2024-12-10 22:53:14.706804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.128 [2024-12-10 22:53:14.706811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bf70 is same with the state(6) to be set
00:22:07.129 [2024-12-10 22:53:14.706871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706889] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ac90 is same with the state(6) to be set
00:22:07.129 [2024-12-10 22:53:14.706967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.706991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.706999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa169c0 is same with the state(6) to be set
00:22:07.129 [2024-12-10 22:53:14.707050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707066] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0d080 is same with the state(6) to be set
00:22:07.129 [2024-12-10 22:53:14.707135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:07.129 [2024-12-10 22:53:14.707194]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16e50 is same with the state(6) to be set
00:22:07.129 [2024-12-10 22:53:14.707399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24535c0 is same with the state(6) to be set
00:22:07.129 [2024-12-10 22:53:14.707527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.129 [2024-12-10 22:53:14.707549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.129 [2024-12-10 22:53:14.707572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.129 [2024-12-10 22:53:14.707592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.129 [2024-12-10 22:53:14.707610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.129 [2024-12-10 22:53:14.707621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.130 [2024-12-10 22:53:14.707917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.130 [2024-12-10 22:53:14.707924] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.130 [2024-12-10 22:53:14.707933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.130 [2024-12-10 22:53:14.707941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.130 [2024-12-10 22:53:14.707949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.130 [2024-12-10 22:53:14.707955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.130 [2024-12-10 22:53:14.707964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.130 [2024-12-10 22:53:14.707970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.130 [2024-12-10 22:53:14.707980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.130 [2024-12-10 22:53:14.707987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.130 [2024-12-10 22:53:14.707996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.130 [2024-12-10 22:53:14.708002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.130 [2024-12-10 22:53:14.708011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 
22:53:14.708195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708284] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 
22:53:14.708461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708542] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.131 [2024-12-10 22:53:14.708592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.131 [2024-12-10 22:53:14.708618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:07.131 [2024-12-10 22:53:14.708881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.131 [2024-12-10 22:53:14.708898] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.131 [2024-12-10 22:53:14.708907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the 
state(6) to be set 00:22:07.131 [2024-12-10 22:53:14.708913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.131 [2024-12-10 22:53:14.708921] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.131 [2024-12-10 22:53:14.708930] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708944] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708975] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.708987] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 
[2024-12-10 22:53:14.708994] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709000] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709013] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709052] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709058] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709071] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709078] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709099] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709106] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709121] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709133] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709150] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709162] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709175] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709241] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453a90 is same with the state(6) to be set 00:22:07.132 [2024-12-10 22:53:14.709650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.132 [2024-12-10 22:53:14.709873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.132 [2024-12-10 22:53:14.709882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709950] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.709987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.709995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710036] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 
[2024-12-10 22:53:14.710214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.133 [2024-12-10 22:53:14.710447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.133 [2024-12-10 22:53:14.710466] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710509] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710539] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710554] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710561] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.133 [2024-12-10 22:53:14.710575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710582] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710634] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710650] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710664] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710670] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710685] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710725] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710733] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710739] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710755] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710761] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710767] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710794] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710815] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710868] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.710974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711092] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711146] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711373] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711427] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.711758] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2453f80 is same with the state(6) to be set 00:22:07.134 [2024-12-10 22:53:14.727441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 
[2024-12-10 22:53:14.727501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.134 [2024-12-10 22:53:14.727674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.134 [2024-12-10 22:53:14.727686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.727695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.727706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.727716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.727727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.727739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.727751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.727760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.727771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.727781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.727792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.727802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.727812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5ba80 is same with the state(6) to be set 00:22:07.135 [2024-12-10 22:53:14.727951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ae90 (9): Bad file descriptor 00:22:07.135 [2024-12-10 22:53:14.728004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 
[2024-12-10 22:53:14.728037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728047] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728070] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bd50 is same with the state(6) to be set 00:22:07.135 [2024-12-10 22:53:14.728112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bf70 (9): Bad file descriptor 00:22:07.135 [2024-12-10 22:53:14.728142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe849c0 is same with the state(6) to be set 00:22:07.135 [2024-12-10 22:53:14.728247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ac90 (9): Bad file descriptor 00:22:07.135 [2024-12-10 22:53:14.728282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728344] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eab0 is same with the state(6) to be set 00:22:07.135 [2024-12-10 22:53:14.728396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:07.135 [2024-12-10 22:53:14.728471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.728481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92b610 is same with the 
state(6) to be set 00:22:07.135 [2024-12-10 22:53:14.728502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa169c0 (9): Bad file descriptor 00:22:07.135 [2024-12-10 22:53:14.728525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0d080 (9): Bad file descriptor 00:22:07.135 [2024-12-10 22:53:14.728544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa16e50 (9): Bad file descriptor 00:22:07.135 [2024-12-10 22:53:14.730579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730699] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.135 [2024-12-10 22:53:14.730896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.135 [2024-12-10 22:53:14.730906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.730916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.730927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.730938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 
[2024-12-10 22:53:14.730948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.730960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.730969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.730981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.730991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 
[2024-12-10 22:53:14.731434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.136 [2024-12-10 22:53:14.731761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.136 [2024-12-10 22:53:14.731772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 
22:53:14.731916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.731968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.731977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.732008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:07.137 [2024-12-10 22:53:14.733280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:07.137 [2024-12-10 22:53:14.734955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:07.137 [2024-12-10 22:53:14.735114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.137 [2024-12-10 22:53:14.735135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0ae90 with addr=10.0.0.2, port=4420 00:22:07.137 
[2024-12-10 22:53:14.735147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ae90 is same with the state(6) to be set 00:22:07.137 [2024-12-10 22:53:14.736155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:22:07.137 [2024-12-10 22:53:14.736197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bd50 (9): Bad file descriptor 00:22:07.137 [2024-12-10 22:53:14.736452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.137 [2024-12-10 22:53:14.736471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8bf70 with addr=10.0.0.2, port=4420 00:22:07.137 [2024-12-10 22:53:14.736483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bf70 is same with the state(6) to be set 00:22:07.137 [2024-12-10 22:53:14.736496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ae90 (9): Bad file descriptor 00:22:07.137 [2024-12-10 22:53:14.736550] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:07.137 [2024-12-10 22:53:14.736623] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:07.137 [2024-12-10 22:53:14.736684] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:07.137 [2024-12-10 22:53:14.736793] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:07.137 [2024-12-10 22:53:14.737236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bf70 (9): Bad file descriptor 00:22:07.137 [2024-12-10 22:53:14.737257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:07.137 [2024-12-10 22:53:14.737268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller 
reinitialization failed 00:22:07.137 [2024-12-10 22:53:14.737278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:07.137 [2024-12-10 22:53:14.737289] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:07.137 [2024-12-10 22:53:14.737408] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:07.137 [2024-12-10 22:53:14.737465] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:07.137 [2024-12-10 22:53:14.737684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.137 [2024-12-10 22:53:14.737703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8bd50 with addr=10.0.0.2, port=4420 00:22:07.137 [2024-12-10 22:53:14.737714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bd50 is same with the state(6) to be set 00:22:07.137 [2024-12-10 22:53:14.737725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:07.137 [2024-12-10 22:53:14.737735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:07.137 [2024-12-10 22:53:14.737745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:07.137 [2024-12-10 22:53:14.737755] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:07.137 [2024-12-10 22:53:14.737821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737952] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.737978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.737990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.738000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.738012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.738022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.738034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.738043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.738056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.738065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.137 [2024-12-10 22:53:14.738077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.137 [2024-12-10 22:53:14.738088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:12 through cid:63 (lba:26112 through lba:32640, stepping by len:128) ...]
00:22:07.139 [2024-12-10 22:53:14.739238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1c5b0 is same with the state(6) to be set 00:22:07.139 [2024-12-10 22:53:14.739358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bd50 (9): Bad file descriptor 00:22:07.139 [2024-12-10 22:53:14.739381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe849c0 (9): Bad file descriptor 00:22:07.139 [2024-12-10 22:53:14.739405] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 
00:22:07.139 [2024-12-10 22:53:14.739421] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3eab0 (9): Bad file descriptor 00:22:07.139 [2024-12-10 22:53:14.739442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92b610 (9): Bad file descriptor 00:22:07.139 [2024-12-10 22:53:14.740795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:07.139 [2024-12-10 22:53:14.740831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:07.139 [2024-12-10 22:53:14.740843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:07.139 [2024-12-10 22:53:14.740854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:07.139 [2024-12-10 22:53:14.740863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
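The repeating *NOTICE* pairs in this log follow a fixed pattern: each aborted READ advances cid by one and lba by the 128-block read length. A minimal sketch (not part of SPDK or the autotest suite) that parses and checks that pattern, assuming the field layout shown in these lines:

```python
import re

# The two READ entries below are copied from this log; the regex is an
# assumption about which fields stay fixed (sqid/cid/nsid/lba/len).
SAMPLE = (
    "[2024-12-10 22:53:14.738099] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
    "[2024-12-10 22:53:14.738109] nvme_qpair.c: 474:spdk_nvme_print_completion: "
    "*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0\n"
    "[2024-12-10 22:53:14.738120] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0\n"
)

CMD_RE = re.compile(r"READ sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)")

def parse_reads(text: str):
    """Extract (cid, lba, len) for every printed READ command."""
    return [(int(m.group(2)), int(m.group(4)), int(m.group(5)))
            for m in CMD_RE.finditer(text)]

reads = parse_reads(SAMPLE)
print(reads)  # [(12, 26112, 128), (13, 26240, 128)]
# Consecutive aborted READs advance by one cid and by len blocks of LBA.
assert reads[1][0] - reads[0][0] == 1
assert reads[1][1] - reads[0][1] == reads[0][2]
```

Running such a check over a full burst confirms the queue was drained in order when the submission queue was deleted during the controller reset.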
00:22:07.139 [2024-12-10 22:53:14.740918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.139 [2024-12-10 22:53:14.740932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:55 (lba:24704 through lba:31616, stepping by len:128) ...]
00:22:07.140 [2024-12-10 22:53:14.742203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.140 
[2024-12-10 22:53:14.742213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.140 [2024-12-10 22:53:14.742224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.140 [2024-12-10 22:53:14.742234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.140 [2024-12-10 22:53:14.742246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.140 [2024-12-10 22:53:14.742256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.140 [2024-12-10 22:53:14.742267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.140 [2024-12-10 22:53:14.742277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.140 [2024-12-10 22:53:14.742288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.140 [2024-12-10 22:53:14.742297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.140 [2024-12-10 22:53:14.742310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.140 [2024-12-10 22:53:14.742319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.742332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.742344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.742355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.742365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.742375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1ae10 is same with the state(6) to be set 00:22:07.141 [2024-12-10 22:53:14.743737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.141 [2024-12-10 22:53:14.743960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.743983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.743995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744090] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 
22:53:14.744487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744616] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.141 [2024-12-10 22:53:14.744626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.141 [2024-12-10 22:53:14.744639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 
[2024-12-10 22:53:14.744863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.744990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.744998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.745109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.745117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdee200 is same with the state(6) to be set 00:22:07.142 [2024-12-10 22:53:14.746116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.142 [2024-12-10 22:53:14.746142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746239] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.142 [2024-12-10 22:53:14.746356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.142 [2024-12-10 22:53:14.746365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.143 [2024-12-10 22:53:14.746430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746522] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 
22:53:14.746791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746878] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.143 [2024-12-10 22:53:14.746928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.143 [2024-12-10 22:53:14.746935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.746943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.746950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.746958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.746965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.746974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.746981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.746989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.746996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 
[2024-12-10 22:53:14.747058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.747139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.747148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:07.144 [2024-12-10 22:53:14.747155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:07.144 [2024-12-10 22:53:14.747167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1b340 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.748172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:22:07.144 [2024-12-10 22:53:14.748189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:22:07.144 [2024-12-10 22:53:14.748201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:22:07.144 [2024-12-10 22:53:14.748501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.748518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0ac90 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.748528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ac90 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.748970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.748988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa16e50 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.748997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16e50 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.749072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.749085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0d080 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.749094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0d080 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.749232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.749246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa169c0 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.749255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa169c0 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.749266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ac90 (9): Bad file descriptor
00:22:07.144 [2024-12-10 22:53:14.749948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:22:07.144 [2024-12-10 22:53:14.749968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:22:07.144 [2024-12-10 22:53:14.749979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:22:07.144 [2024-12-10 22:53:14.750005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa16e50 (9): Bad file descriptor
00:22:07.144 [2024-12-10 22:53:14.750016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0d080 (9): Bad file descriptor
00:22:07.144 [2024-12-10 22:53:14.750024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa169c0 (9): Bad file descriptor
00:22:07.144 [2024-12-10 22:53:14.750034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:22:07.144 [2024-12-10 22:53:14.750041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:22:07.144 [2024-12-10 22:53:14.750049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:22:07.144 [2024-12-10 22:53:14.750057] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:22:07.144 [2024-12-10 22:53:14.750198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.750214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0ae90 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.750223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ae90 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.750301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.750313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8bf70 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.750322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bf70 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.750444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:22:07.144 [2024-12-10 22:53:14.750456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8bd50 with addr=10.0.0.2, port=4420
00:22:07.144 [2024-12-10 22:53:14.750465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bd50 is same with the state(6) to be set
00:22:07.144 [2024-12-10 22:53:14.750472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:22:07.144 [2024-12-10 22:53:14.750480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:22:07.144 [2024-12-10 22:53:14.750489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:22:07.144 [2024-12-10 22:53:14.750496] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:22:07.144 [2024-12-10 22:53:14.750504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:22:07.144 [2024-12-10 22:53:14.750511] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:22:07.144 [2024-12-10 22:53:14.750518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:22:07.144 [2024-12-10 22:53:14.750525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:22:07.144 [2024-12-10 22:53:14.750532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:22:07.144 [2024-12-10 22:53:14.750539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:22:07.144 [2024-12-10 22:53:14.750545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:22:07.144 [2024-12-10 22:53:14.750555] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:22:07.144 [2024-12-10 22:53:14.750628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.750640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.750651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.750660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.750671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.750678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.750687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.750695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.750703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.750712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.144 [2024-12-10 22:53:14.750720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.144 [2024-12-10 22:53:14.750727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.145 [2024-12-10 22:53:14.750922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.750989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.750998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751014] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 
22:53:14.751294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751382] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.145 [2024-12-10 22:53:14.751389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.145 [2024-12-10 22:53:14.751398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 
[2024-12-10 22:53:14.751567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.751682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.751689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe1d820 is same with the state(6) to be set 00:22:07.146 [2024-12-10 22:53:14.752700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.146 [2024-12-10 22:53:14.752858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752955] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.752988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.752999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.753007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.753023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.753031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.753039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.753047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.753054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.753062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.753070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.146 [2024-12-10 22:53:14.753078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.146 [2024-12-10 22:53:14.753086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 
22:53:14.753241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753332] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 
[2024-12-10 22:53:14.753516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.147 [2024-12-10 22:53:14.753658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.147 [2024-12-10 22:53:14.753667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.753673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.753682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.753689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.753696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.753705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.753714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.753721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.753730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.753737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.753745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.753752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.753760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b18410 is same with the state(6) to be set 00:22:07.148 [2024-12-10 22:53:14.754762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.148 [2024-12-10 22:53:14.754788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.754991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.754999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:07.148 [2024-12-10 22:53:14.755072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755165] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.148 [2024-12-10 22:53:14.755330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.148 [2024-12-10 22:53:14.755339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 
22:53:14.755441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755530] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 
[2024-12-10 22:53:14.755713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755805] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:07.149 [2024-12-10 22:53:14.755812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:07.149 [2024-12-10 22:53:14.755820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d65b20 is same with the state(6) to be set 00:22:07.149 [2024-12-10 22:53:14.756796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:22:07.149 [2024-12-10 22:53:14.756813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:22:07.149 task offset: 27648 on job bdev=Nvme2n1 fails
00:22:07.149
00:22:07.149 Latency(us)
00:22:07.149 [2024-12-10T21:53:14.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:07.149 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme1n1 ended in about 0.95 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme1n1 : 0.95 202.19 12.64 67.40 0.00 235140.90 16640.45 217009.64
00:22:07.149 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme2n1 ended in about 0.94 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme2n1 : 0.94 205.15 12.82 68.38 0.00 227693.75 16868.40 227951.30
00:22:07.149 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme3n1 ended in about 0.95 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme3n1 : 0.95 201.63 12.60 67.21 0.00 227823.08 14246.96 238892.97
00:22:07.149 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme4n1 ended in about 0.95 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme4n1 : 0.95 201.20 12.57 67.07 0.00 224318.11 21883.33 220656.86
00:22:07.149 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme5n1 ended in about 0.95 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme5n1 : 0.95 202.80 12.68 67.60 0.00 218399.17 17552.25 220656.86
00:22:07.149 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme6n1 ended in about 0.96 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme6n1 : 0.96 200.25 12.52 66.75 0.00 217445.95 29633.67 204244.37
00:22:07.149 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme7n1 ended in about 0.96 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.149 Nvme7n1 : 0.96 199.82 12.49 66.61 0.00 213964.91 14474.91 223392.28
00:22:07.149 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.149 Job: Nvme8n1 ended in about 0.96 seconds with error
00:22:07.149 Verification LBA range: start 0x0 length 0x400
00:22:07.150 Nvme8n1 : 0.96 203.55 12.72 66.46 0.00 207243.57 5784.26 223392.28
00:22:07.150 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.150 Job: Nvme9n1 ended in about 0.94 seconds with error
00:22:07.150 Verification LBA range: start 0x0 length 0x400
00:22:07.150 Nvme9n1 : 0.94 204.05 12.75 68.02 0.00 200967.35 16982.37 242540.19
00:22:07.150 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:07.150 Job: Nvme10n1 ended in about 0.94 seconds with error
00:22:07.150 Verification LBA range: start 0x0 length 0x400
00:22:07.150 Nvme10n1 : 0.94 204.41 12.78 68.14 0.00 196694.15 19603.81 225215.89
00:22:07.150 [2024-12-10T21:53:14.879Z] ===================================================================================================================
00:22:07.150 [2024-12-10T21:53:14.879Z] Total : 2025.05 126.57 673.63 0.00 216953.92 5784.26 242540.19
00:22:07.150 [2024-12-10 22:53:14.789113] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:07.150 [2024-12-10 22:53:14.789181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.789243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ae90 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.789259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bf70 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.789269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bd50 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.789890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.789917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3eab0 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.789930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3eab0 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.790066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.790078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x92b610 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.790087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x92b610 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.790155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10
22:53:14.790171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe849c0 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.790179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe849c0 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.790188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.790194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.790204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:07.150 [2024-12-10 22:53:14.790214] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:22:07.150 [2024-12-10 22:53:14.790224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.790234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.790242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:07.150 [2024-12-10 22:53:14.790248] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:22:07.150 [2024-12-10 22:53:14.790256] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.790263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.790270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:07.150 [2024-12-10 22:53:14.790276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:22:07.150 [2024-12-10 22:53:14.790333] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:22:07.150 [2024-12-10 22:53:14.791014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.791073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3eab0 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.791087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x92b610 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.791097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe849c0 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.791148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.791167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.791177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.791185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] 
resetting controller 00:22:07.150 [2024-12-10 22:53:14.791195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.791204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:22:07.150 [2024-12-10 22:53:14.791478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.791492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0ac90 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.791501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ac90 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.791509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.791515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.791522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:22:07.150 [2024-12-10 22:53:14.791530] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:22:07.150 [2024-12-10 22:53:14.791539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.791546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.791554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:22:07.150 [2024-12-10 22:53:14.791560] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:22:07.150 [2024-12-10 22:53:14.791570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.791577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.791584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:22:07.150 [2024-12-10 22:53:14.791590] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:22:07.150 [2024-12-10 22:53:14.791812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.791825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa169c0 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.791833] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa169c0 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.791988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.792001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0d080 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.792009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0d080 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.792130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.792142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa16e50 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.792151] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa16e50 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.792278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.792290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8bd50 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.792299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bd50 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.792422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.792434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe8bf70 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.792442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe8bf70 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.792509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:07.150 [2024-12-10 22:53:14.792521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa0ae90 with addr=10.0.0.2, port=4420 00:22:07.150 [2024-12-10 22:53:14.792529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa0ae90 is same with the state(6) to be set 00:22:07.150 [2024-12-10 22:53:14.792539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ac90 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa169c0 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0d080 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792588] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa16e50 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bd50 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe8bf70 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa0ae90 (9): Bad file descriptor 00:22:07.150 [2024-12-10 22:53:14.792626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.792635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.792643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:22:07.150 [2024-12-10 22:53:14.792650] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:22:07.150 [2024-12-10 22:53:14.792675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:22:07.150 [2024-12-10 22:53:14.792684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:22:07.150 [2024-12-10 22:53:14.792691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:22:07.151 [2024-12-10 22:53:14.792697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:22:07.151 [2024-12-10 22:53:14.792705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:22:07.151 [2024-12-10 22:53:14.792711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:22:07.151 [2024-12-10 22:53:14.792718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:22:07.151 [2024-12-10 22:53:14.792725] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:22:07.151 [2024-12-10 22:53:14.792733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:22:07.151 [2024-12-10 22:53:14.792739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:22:07.151 [2024-12-10 22:53:14.792746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:07.151 [2024-12-10 22:53:14.792753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:22:07.151 [2024-12-10 22:53:14.792919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:22:07.151 [2024-12-10 22:53:14.792929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:22:07.151 [2024-12-10 22:53:14.792936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:22:07.151 [2024-12-10 22:53:14.792943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:22:07.151 [2024-12-10 22:53:14.792951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:22:07.151 [2024-12-10 22:53:14.792958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:22:07.151 [2024-12-10 22:53:14.792965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:22:07.151 [2024-12-10 22:53:14.792972] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:22:07.151 [2024-12-10 22:53:14.792980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:22:07.151 [2024-12-10 22:53:14.792986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:22:07.151 [2024-12-10 22:53:14.792993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:22:07.151 [2024-12-10 22:53:14.793000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:22:07.409 22:53:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2434472 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2434472 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2434472 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:08.785 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:08.786 rmmod nvme_tcp 00:22:08.786 rmmod nvme_fabrics 00:22:08.786 rmmod nvme_keyring 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:22:08.786 
22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2434187 ']' 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2434187 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2434187 ']' 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2434187 00:22:08.786 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2434187) - No such process 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2434187 is not found' 00:22:08.786 Process with pid 2434187 is not found 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.786 22:53:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:10.691 00:22:10.691 real 0m7.716s 00:22:10.691 user 0m18.745s 00:22:10.691 sys 0m1.340s 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:22:10.691 ************************************ 00:22:10.691 END TEST nvmf_shutdown_tc3 00:22:10.691 ************************************ 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:10.691 ************************************ 00:22:10.691 START TEST nvmf_shutdown_tc4 00:22:10.691 ************************************ 00:22:10.691 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.691 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:10.691 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:10.691 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:10.691 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:10.691 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:10.691 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:10.692 Found net devices under 0000:86:00.0: cvl_0_0 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:10.692 Found net devices under 0000:86:00.1: cvl_0_1 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:10.692 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:10.692 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:10.951 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:10.951 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:22:10.951 00:22:10.951 --- 10.0.0.2 ping statistics --- 00:22:10.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.951 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:10.951 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.951 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:22:10.951 00:22:10.951 --- 10.0.0.1 ping statistics --- 00:22:10.951 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.951 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.951 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:10.952 22:53:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2435722 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2435722 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2435722 ']' 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:10.952 22:53:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:11.210 [2024-12-10 22:53:18.720316] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:22:11.210 [2024-12-10 22:53:18.720357] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:11.210 [2024-12-10 22:53:18.798125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:11.211 [2024-12-10 22:53:18.838974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:11.211 [2024-12-10 22:53:18.839016] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:11.211 [2024-12-10 22:53:18.839023] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:11.211 [2024-12-10 22:53:18.839030] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:11.211 [2024-12-10 22:53:18.839035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:11.211 [2024-12-10 22:53:18.840509] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:11.211 [2024-12-10 22:53:18.840615] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:11.211 [2024-12-10 22:53:18.840720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:11.211 [2024-12-10 22:53:18.840722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:12.145 [2024-12-10 22:53:19.608025] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.145 22:53:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # 
cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:22:12.145 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:22:12.146 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.146 22:53:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:12.146 Malloc1 00:22:12.146 [2024-12-10 22:53:19.718300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:12.146 Malloc2 00:22:12.146 Malloc3 00:22:12.146 Malloc4 00:22:12.146 Malloc5 00:22:12.404 Malloc6 00:22:12.404 Malloc7 00:22:12.404 Malloc8 00:22:12.404 Malloc9 
00:22:12.404 Malloc10 00:22:12.404 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.404 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:22:12.404 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.404 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:12.663 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2436004 00:22:12.663 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:22:12.663 22:53:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:22:12.663 [2024-12-10 22:53:20.231948] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2435722 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2435722 ']' 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2435722 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2435722 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2435722' 00:22:17.944 killing process with pid 2435722 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2435722 00:22:17.944 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2435722 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 
00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 
starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 [2024-12-10 22:53:25.231304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 
Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 [2024-12-10 22:53:25.232183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 
starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 
Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 Write completed with error (sct=0, sc=8) 00:22:17.944 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O failed: -6 00:22:17.945 [2024-12-10 22:53:25.233213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.945 Write completed with error (sct=0, sc=8) 00:22:17.945 starting I/O 
failed: -6
00:22:17.945 Write completed with error (sct=0, sc=8)
00:22:17.945 starting I/O failed: -6
00:22:17.945 [2024-12-10 22:53:25.234942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:17.945 NVMe io qpair process completion error
00:22:17.945 [2024-12-10 22:53:25.235118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1102e70 is same with the state(6) to be set
00:22:17.945 [2024-12-10 22:53:25.236844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11067e0 is same with the state(6) to be set
00:22:17.945 [2024-12-10 22:53:25.237299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1106cb0 is same with the state(6) to be set
00:22:17.945 [2024-12-10 22:53:25.237504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1105e40 is same with the state(6) to be set
00:22:17.946 [2024-12-10 22:53:25.238091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.947 [2024-12-10 22:53:25.240759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:17.947 NVMe io qpair process completion error
00:22:17.947 [2024-12-10 22:53:25.241769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:17.947 [2024-12-10 22:53:25.242692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:17.948 [2024-12-10 22:53:25.243681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:22:17.948 [2024-12-10 22:53:25.245578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:17.948 NVMe io qpair process completion error
00:22:17.949 [2024-12-10 22:53:25.246595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:17.949 starting I/O failed: -6
00:22:17.949 Write completed with error (sct=0, sc=8)
00:22:17.949 starting I/O
failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, 
sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 [2024-12-10 22:53:25.247408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 
00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with 
error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 [2024-12-10 22:53:25.248477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O 
failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.949 Write completed with error (sct=0, sc=8) 00:22:17.949 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting 
I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 
starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 [2024-12-10 22:53:25.250531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:17.950 NVMe io qpair process completion error 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 
00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 [2024-12-10 22:53:25.251500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 
00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 
00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 [2024-12-10 22:53:25.252398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 
00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.950 Write completed with error (sct=0, sc=8) 00:22:17.950 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with 
error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 [2024-12-10 22:53:25.253459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed 
with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write completed with error (sct=0, sc=8) 00:22:17.951 starting I/O failed: -6 00:22:17.951 Write 
completed with error (sct=0, sc=8)
00:22:17.951 starting I/O failed: -6
00:22:17.951 Write completed with error (sct=0, sc=8)
00:22:17.951 starting I/O failed: -6
[... identical "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries repeated for each outstanding write; repeats elided ...]
00:22:17.951 [2024-12-10 22:53:25.258328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:22:17.951 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.952 [2024-12-10 22:53:25.259317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.952 [2024-12-10 22:53:25.260280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.952 [2024-12-10 22:53:25.261288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.953 [2024-12-10 22:53:25.264884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:17.953 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.953 [2024-12-10 22:53:25.265880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.954 [2024-12-10 22:53:25.266792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.954 [2024-12-10 22:53:25.267773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.954 [2024-12-10 22:53:25.269561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:22:17.954 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.955 [2024-12-10 22:53:25.270520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:22:17.955 [2024-12-10 22:53:25.271397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:22:17.955 Write completed with error (sct=0, sc=8)
00:22:17.955 starting I/O failed: -6
00:22:17.955
Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, 
sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 [2024-12-10 22:53:25.272451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 
00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.955 Write completed with error (sct=0, sc=8) 00:22:17.955 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: 
-6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O 
failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 [2024-12-10 22:53:25.278308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:17.956 NVMe io qpair process completion error 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 
Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 [2024-12-10 22:53:25.279123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 
00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 [2024-12-10 22:53:25.279900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 
starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.956 Write completed with error (sct=0, sc=8) 00:22:17.956 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 
Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 [2024-12-10 22:53:25.280999] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write 
completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 
Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 [2024-12-10 22:53:25.285220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:17.957 NVMe io qpair process completion error 00:22:17.957 Write completed 
with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 
00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 Write completed with error (sct=0, sc=8) 00:22:17.957 starting I/O failed: -6 00:22:17.958 [2024-12-10 22:53:25.286209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 
00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 [2024-12-10 22:53:25.287111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 
00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with 
error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 
starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 [2024-12-10 22:53:25.288182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error 
(sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.958 Write completed with error (sct=0, sc=8) 00:22:17.958 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with 
error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 Write completed with error (sct=0, sc=8) 00:22:17.959 starting I/O failed: -6 00:22:17.959 [2024-12-10 22:53:25.290586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:22:17.959 NVMe io qpair process completion error 00:22:17.959 Initializing NVMe Controllers 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 
00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:22:17.959 Controller IO queue size 128, less than required. 00:22:17.959 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:22:17.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:22:17.959 Initialization complete. Launching workers. 
00:22:17.959 ======================================================== 00:22:17.959 Latency(us) 00:22:17.959 Device Information : IOPS MiB/s Average min max 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2160.94 92.85 59238.24 892.68 118480.36 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2176.75 93.53 58869.18 640.11 110031.51 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2143.45 92.10 59825.89 957.62 108448.67 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2171.27 93.30 58332.79 787.29 105996.05 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2138.60 91.89 59253.46 454.40 104789.71 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2100.24 90.24 60345.62 721.08 103191.13 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2128.90 91.48 59546.99 826.95 101668.63 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2120.47 91.11 59799.83 932.51 107613.71 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2159.46 92.79 58772.40 700.36 99248.64 00:22:17.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2193.82 94.27 57884.52 932.57 98304.64 00:22:17.959 ======================================================== 00:22:17.959 Total : 21493.89 923.57 59178.73 454.40 118480.36 00:22:17.959 00:22:17.959 [2024-12-10 22:53:25.293568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d11740 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10890 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1d10ef0 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d12720 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10bc0 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d10560 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d11410 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d11a70 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d12900 is same with the state(6) to be set 00:22:17.959 [2024-12-10 22:53:25.293856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d12ae0 is same with the state(6) to be set 00:22:17.959 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred 00:22:17.959 22:53:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2436004 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2436004 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2436004 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:22:18.898 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:22:19.158 22:53:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:19.158 rmmod nvme_tcp 00:22:19.158 rmmod nvme_fabrics 00:22:19.158 rmmod nvme_keyring 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2435722 ']' 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2435722 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2435722 ']' 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2435722 00:22:19.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2435722) - No such process 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2435722 is not 
found' 00:22:19.158 Process with pid 2435722 is not found 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:19.158 22:53:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.064 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:21.064 00:22:21.064 real 0m10.437s 00:22:21.064 user 0m27.700s 00:22:21.064 sys 0m5.162s 00:22:21.064 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:22:21.064 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:22:21.064 ************************************ 00:22:21.064 END TEST nvmf_shutdown_tc4 00:22:21.064 ************************************ 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:22:21.323 00:22:21.323 real 0m41.593s 00:22:21.323 user 1m43.556s 00:22:21.323 sys 0m13.937s 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:21.323 ************************************ 00:22:21.323 END TEST nvmf_shutdown 00:22:21.323 ************************************ 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:21.323 ************************************ 00:22:21.323 START TEST nvmf_nsid 00:22:21.323 ************************************ 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:22:21.323 * Looking for test storage... 
00:22:21.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:22:21.323 22:53:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 
00:22:21.323 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:21.324 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:22:21.324 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:22:21.324 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:21.324 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:22:21.324 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:22:21.324 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.583 --rc genhtml_branch_coverage=1 00:22:21.583 --rc genhtml_function_coverage=1 00:22:21.583 --rc genhtml_legend=1 00:22:21.583 --rc geninfo_all_blocks=1 00:22:21.583 
--rc geninfo_unexecuted_blocks=1 00:22:21.583 00:22:21.583 ' 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.583 --rc genhtml_branch_coverage=1 00:22:21.583 --rc genhtml_function_coverage=1 00:22:21.583 --rc genhtml_legend=1 00:22:21.583 --rc geninfo_all_blocks=1 00:22:21.583 --rc geninfo_unexecuted_blocks=1 00:22:21.583 00:22:21.583 ' 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.583 --rc genhtml_branch_coverage=1 00:22:21.583 --rc genhtml_function_coverage=1 00:22:21.583 --rc genhtml_legend=1 00:22:21.583 --rc geninfo_all_blocks=1 00:22:21.583 --rc geninfo_unexecuted_blocks=1 00:22:21.583 00:22:21.583 ' 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:21.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:21.583 --rc genhtml_branch_coverage=1 00:22:21.583 --rc genhtml_function_coverage=1 00:22:21.583 --rc genhtml_legend=1 00:22:21.583 --rc geninfo_all_blocks=1 00:22:21.583 --rc geninfo_unexecuted_blocks=1 00:22:21.583 00:22:21.583 ' 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:21.583 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:21.584 22:53:29 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:21.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:22:21.584 22:53:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:28.154 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:28.155 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:28.155 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:28.155 Found net devices under 0000:86:00.0: cvl_0_0 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:28.155 Found net devices under 0000:86:00.1: cvl_0_1 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:28.155 22:53:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:28.155 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:22:28.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:22:28.155 00:22:28.155 --- 10.0.0.2 ping statistics --- 00:22:28.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.155 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:28.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:22:28.155 00:22:28.155 --- 10.0.0.1 ping statistics --- 00:22:28.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.155 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:28.155 22:53:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:28.155 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:28.155 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:28.155 22:53:35 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.155 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:28.155 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2440461 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2440461 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2440461 ']' 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:28.156 [2024-12-10 22:53:35.069826] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:22:28.156 [2024-12-10 22:53:35.069872] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.156 [2024-12-10 22:53:35.147718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.156 [2024-12-10 22:53:35.190873] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.156 [2024-12-10 22:53:35.190908] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.156 [2024-12-10 22:53:35.190916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.156 [2024-12-10 22:53:35.190925] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.156 [2024-12-10 22:53:35.190930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:28.156 [2024-12-10 22:53:35.191479] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2440512 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:28.156 
22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=26d9c496-dd4c-4f66-84d0-c1bc88a76b03 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3cfe0925-ddb4-4992-9dce-62840cb4e67d 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=3c1c8063-ed70-4473-aa3d-a360890e0d70 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:28.156 null0 00:22:28.156 null1 00:22:28.156 [2024-12-10 22:53:35.386279] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:22:28.156 [2024-12-10 22:53:35.386325] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2440512 ] 00:22:28.156 null2 00:22:28.156 [2024-12-10 22:53:35.392985] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.156 [2024-12-10 22:53:35.417203] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2440512 /var/tmp/tgt2.sock 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2440512 ']' 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:22:28.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:28.156 [2024-12-10 22:53:35.460246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.156 [2024-12-10 22:53:35.501032] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:28.156 22:53:35 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:28.415 [2024-12-10 22:53:36.021326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.415 [2024-12-10 22:53:36.037436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:28.415 nvme0n1 nvme0n2 00:22:28.415 nvme1n1 00:22:28.415 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:28.415 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:28.415 22:53:36 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:29.793 22:53:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 26d9c496-dd4c-4f66-84d0-c1bc88a76b03 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:30.731 22:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=26d9c496dd4c4f6684d0c1bc88a76b03 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 26D9C496DD4C4F6684D0C1BC88A76B03 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 26D9C496DD4C4F6684D0C1BC88A76B03 == \2\6\D\9\C\4\9\6\D\D\4\C\4\F\6\6\8\4\D\0\C\1\B\C\8\8\A\7\6\B\0\3 ]] 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3cfe0925-ddb4-4992-9dce-62840cb4e67d 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:30.731 
22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3cfe0925ddb449929dce62840cb4e67d 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3CFE0925DDB449929DCE62840CB4E67D 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3CFE0925DDB449929DCE62840CB4E67D == \3\C\F\E\0\9\2\5\D\D\B\4\4\9\9\2\9\D\C\E\6\2\8\4\0\C\B\4\E\6\7\D ]] 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 3c1c8063-ed70-4473-aa3d-a360890e0d70 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3c1c8063ed704473aa3da360890e0d70 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3C1C8063ED704473AA3DA360890E0D70 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 3C1C8063ED704473AA3DA360890E0D70 == \3\C\1\C\8\0\6\3\E\D\7\0\4\4\7\3\A\A\3\D\A\3\6\0\8\9\0\E\0\D\7\0 ]] 00:22:30.731 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2440512 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2440512 ']' 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2440512 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2440512 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:30.990 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:30.991 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2440512' 00:22:30.991 killing process with pid 2440512 00:22:30.991 22:53:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2440512 00:22:30.991 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2440512 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.249 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.249 rmmod nvme_tcp 00:22:31.249 rmmod nvme_fabrics 00:22:31.249 rmmod nvme_keyring 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2440461 ']' 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2440461 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2440461 ']' 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2440461 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:31.510 22:53:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.510 22:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2440461 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2440461' 00:22:31.510 killing process with pid 2440461 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2440461 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2440461 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:31.510 22:53:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:31.510 22:53:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.044 22:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.044 00:22:34.044 real 0m12.394s 00:22:34.044 user 0m9.706s 00:22:34.044 sys 0m5.504s 00:22:34.044 22:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.044 22:53:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:34.044 ************************************ 00:22:34.044 END TEST nvmf_nsid 00:22:34.044 ************************************ 00:22:34.044 22:53:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:34.044 00:22:34.044 real 12m1.947s 00:22:34.044 user 25m43.488s 00:22:34.044 sys 3m42.001s 00:22:34.044 22:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.044 22:53:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.044 ************************************ 00:22:34.044 END TEST nvmf_target_extra 00:22:34.044 ************************************ 00:22:34.044 22:53:41 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:34.044 22:53:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.044 22:53:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.044 22:53:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:34.044 ************************************ 00:22:34.044 START TEST nvmf_host 00:22:34.044 ************************************ 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:34.044 * Looking for test storage... 
00:22:34.044 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.044 --rc genhtml_branch_coverage=1 00:22:34.044 --rc genhtml_function_coverage=1 00:22:34.044 --rc genhtml_legend=1 00:22:34.044 --rc geninfo_all_blocks=1 00:22:34.044 --rc geninfo_unexecuted_blocks=1 00:22:34.044 00:22:34.044 ' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.044 --rc genhtml_branch_coverage=1 00:22:34.044 --rc genhtml_function_coverage=1 00:22:34.044 --rc genhtml_legend=1 00:22:34.044 --rc 
geninfo_all_blocks=1 00:22:34.044 --rc geninfo_unexecuted_blocks=1 00:22:34.044 00:22:34.044 ' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.044 --rc genhtml_branch_coverage=1 00:22:34.044 --rc genhtml_function_coverage=1 00:22:34.044 --rc genhtml_legend=1 00:22:34.044 --rc geninfo_all_blocks=1 00:22:34.044 --rc geninfo_unexecuted_blocks=1 00:22:34.044 00:22:34.044 ' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.044 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.044 --rc genhtml_branch_coverage=1 00:22:34.044 --rc genhtml_function_coverage=1 00:22:34.044 --rc genhtml_legend=1 00:22:34.044 --rc geninfo_all_blocks=1 00:22:34.044 --rc geninfo_unexecuted_blocks=1 00:22:34.044 00:22:34.044 ' 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.044 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.045 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:34.045 ************************************ 00:22:34.045 START TEST nvmf_multicontroller 00:22:34.045 ************************************ 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:34.045 * Looking for test storage... 
00:22:34.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.045 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.306 --rc genhtml_branch_coverage=1 00:22:34.306 --rc 
genhtml_function_coverage=1 00:22:34.306 --rc genhtml_legend=1 00:22:34.306 --rc geninfo_all_blocks=1 00:22:34.306 --rc geninfo_unexecuted_blocks=1 00:22:34.306 00:22:34.306 ' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.306 --rc genhtml_branch_coverage=1 00:22:34.306 --rc genhtml_function_coverage=1 00:22:34.306 --rc genhtml_legend=1 00:22:34.306 --rc geninfo_all_blocks=1 00:22:34.306 --rc geninfo_unexecuted_blocks=1 00:22:34.306 00:22:34.306 ' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.306 --rc genhtml_branch_coverage=1 00:22:34.306 --rc genhtml_function_coverage=1 00:22:34.306 --rc genhtml_legend=1 00:22:34.306 --rc geninfo_all_blocks=1 00:22:34.306 --rc geninfo_unexecuted_blocks=1 00:22:34.306 00:22:34.306 ' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.306 --rc genhtml_branch_coverage=1 00:22:34.306 --rc genhtml_function_coverage=1 00:22:34.306 --rc genhtml_legend=1 00:22:34.306 --rc geninfo_all_blocks=1 00:22:34.306 --rc geninfo_unexecuted_blocks=1 00:22:34.306 00:22:34.306 ' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # 
[[ -e /bin/wpdk_common.sh ]] 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.306 22:53:41 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:34.306 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.307 22:53:41 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:40.883 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:40.883 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.883 22:53:47 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:40.883 Found net devices under 0000:86:00.0: cvl_0_0 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:40.883 Found net devices under 0000:86:00.1: cvl_0_1 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.883 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:40.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:22:40.884 00:22:40.884 --- 10.0.0.2 ping statistics --- 00:22:40.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.884 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:40.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:22:40.884 00:22:40.884 --- 10.0.0.1 ping statistics --- 00:22:40.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.884 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2444798 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2444798 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2444798 ']' 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 [2024-12-10 22:53:47.778417] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:22:40.884 [2024-12-10 22:53:47.778469] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.884 [2024-12-10 22:53:47.858979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.884 [2024-12-10 22:53:47.900754] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.884 [2024-12-10 22:53:47.900792] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:40.884 [2024-12-10 22:53:47.900798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.884 [2024-12-10 22:53:47.900804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.884 [2024-12-10 22:53:47.900809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.884 [2024-12-10 22:53:47.902212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.884 [2024-12-10 22:53:47.902264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.884 [2024-12-10 22:53:47.902265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.884 22:53:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 [2024-12-10 22:53:48.040341] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 Malloc0 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 [2024-12-10 
22:53:48.099796] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.884 [2024-12-10 22:53:48.111739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:40.884 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 Malloc1 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2444827 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2444827 /var/tmp/bdevperf.sock 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2444827 ']' 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:40.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.885 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.145 NVMe0n1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.145 1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:22:41.145 22:53:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.145 request: 00:22:41.145 { 00:22:41.145 "name": "NVMe0", 00:22:41.145 "trtype": "tcp", 00:22:41.145 "traddr": "10.0.0.2", 00:22:41.145 "adrfam": "ipv4", 00:22:41.145 "trsvcid": "4420", 00:22:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.145 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:41.145 "hostaddr": "10.0.0.1", 00:22:41.145 "prchk_reftag": false, 00:22:41.145 "prchk_guard": false, 00:22:41.145 "hdgst": false, 00:22:41.145 "ddgst": false, 00:22:41.145 "allow_unrecognized_csi": false, 00:22:41.145 "method": "bdev_nvme_attach_controller", 00:22:41.145 "req_id": 1 00:22:41.145 } 00:22:41.145 Got JSON-RPC error response 00:22:41.145 response: 00:22:41.145 { 00:22:41.145 "code": -114, 00:22:41.145 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:41.145 } 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:41.145 22:53:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.145 request: 00:22:41.145 { 00:22:41.145 "name": "NVMe0", 00:22:41.145 "trtype": "tcp", 00:22:41.145 "traddr": "10.0.0.2", 00:22:41.145 "adrfam": "ipv4", 00:22:41.145 "trsvcid": "4420", 00:22:41.145 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:41.145 "hostaddr": "10.0.0.1", 00:22:41.145 "prchk_reftag": false, 00:22:41.145 "prchk_guard": false, 00:22:41.145 "hdgst": false, 00:22:41.145 "ddgst": false, 00:22:41.145 "allow_unrecognized_csi": false, 00:22:41.145 "method": "bdev_nvme_attach_controller", 00:22:41.145 "req_id": 1 00:22:41.145 } 00:22:41.145 Got JSON-RPC error response 00:22:41.145 response: 00:22:41.145 { 00:22:41.145 "code": -114, 00:22:41.145 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:41.145 } 00:22:41.145 22:53:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.145 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.146 request: 00:22:41.146 { 00:22:41.146 "name": "NVMe0", 00:22:41.146 "trtype": "tcp", 00:22:41.146 "traddr": "10.0.0.2", 00:22:41.146 "adrfam": "ipv4", 00:22:41.146 "trsvcid": "4420", 00:22:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.146 "hostaddr": "10.0.0.1", 00:22:41.146 "prchk_reftag": false, 00:22:41.146 "prchk_guard": false, 00:22:41.146 "hdgst": false, 00:22:41.146 "ddgst": false, 00:22:41.146 "multipath": "disable", 00:22:41.146 "allow_unrecognized_csi": false, 00:22:41.146 "method": "bdev_nvme_attach_controller", 00:22:41.146 "req_id": 1 00:22:41.146 } 00:22:41.146 Got JSON-RPC error response 00:22:41.146 response: 00:22:41.146 { 00:22:41.146 "code": -114, 00:22:41.146 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:22:41.146 } 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.146 request: 00:22:41.146 { 00:22:41.146 "name": "NVMe0", 00:22:41.146 "trtype": "tcp", 00:22:41.146 "traddr": "10.0.0.2", 00:22:41.146 "adrfam": "ipv4", 00:22:41.146 "trsvcid": "4420", 00:22:41.146 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:41.146 "hostaddr": "10.0.0.1", 00:22:41.146 "prchk_reftag": false, 00:22:41.146 "prchk_guard": false, 00:22:41.146 "hdgst": false, 00:22:41.146 "ddgst": false, 00:22:41.146 "multipath": "failover", 00:22:41.146 "allow_unrecognized_csi": false, 00:22:41.146 "method": "bdev_nvme_attach_controller", 00:22:41.146 "req_id": 1 00:22:41.146 } 00:22:41.146 Got JSON-RPC error response 00:22:41.146 response: 00:22:41.146 { 00:22:41.146 "code": -114, 00:22:41.146 "message": "A controller named NVMe0 already exists with the specified network path" 00:22:41.146 } 00:22:41.146 22:53:48 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.146 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.146 NVMe0n1 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.405 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:41.405 22:53:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:42.784 { 00:22:42.784 "results": [ 00:22:42.784 { 00:22:42.784 "job": "NVMe0n1", 00:22:42.784 "core_mask": "0x1", 00:22:42.784 "workload": "write", 00:22:42.784 "status": "finished", 00:22:42.784 "queue_depth": 128, 00:22:42.784 "io_size": 4096, 00:22:42.784 "runtime": 1.005042, 00:22:42.784 "iops": 24538.27800231234, 00:22:42.784 "mibps": 95.85264844653258, 00:22:42.784 "io_failed": 0, 00:22:42.784 "io_timeout": 0, 00:22:42.784 "avg_latency_us": 5209.939328874205, 00:22:42.784 "min_latency_us": 1517.3008695652175, 00:22:42.784 "max_latency_us": 9061.064347826086 00:22:42.784 } 00:22:42.784 ], 00:22:42.784 "core_count": 1 00:22:42.784 } 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2444827 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2444827 ']' 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2444827 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444827 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444827' 00:22:42.784 killing process with pid 2444827 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2444827 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2444827 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.784 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt -type f 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:22:42.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt --- 00:22:42.785 [2024-12-10 22:53:48.218929] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:22:42.785 [2024-12-10 22:53:48.218979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2444827 ] 00:22:42.785 [2024-12-10 22:53:48.295125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.785 [2024-12-10 22:53:48.335673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:42.785 [2024-12-10 22:53:48.957471] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name b488bf5e-161d-47a5-9f44-fa5a4b744f74 already exists 00:22:42.785 [2024-12-10 22:53:48.957496] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:b488bf5e-161d-47a5-9f44-fa5a4b744f74 alias for bdev NVMe1n1 00:22:42.785 [2024-12-10 22:53:48.957504] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:42.785 Running I/O for 1 seconds... 00:22:42.785 24534.00 IOPS, 95.84 MiB/s 00:22:42.785 Latency(us) 00:22:42.785 [2024-12-10T21:53:50.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.785 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:42.785 NVMe0n1 : 1.01 24538.28 95.85 0.00 0.00 5209.94 1517.30 9061.06 00:22:42.785 [2024-12-10T21:53:50.514Z] =================================================================================================================== 00:22:42.785 [2024-12-10T21:53:50.514Z] Total : 24538.28 95.85 0.00 0.00 5209.94 1517.30 9061.06 00:22:42.785 Received shutdown signal, test time was about 1.000000 seconds 00:22:42.785 00:22:42.785 Latency(us) 00:22:42.785 [2024-12-10T21:53:50.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.785 [2024-12-10T21:53:50.514Z] =================================================================================================================== 00:22:42.785 [2024-12-10T21:53:50.514Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:22:42.785 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt --- 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:42.785 rmmod nvme_tcp 00:22:42.785 rmmod nvme_fabrics 00:22:42.785 rmmod nvme_keyring 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2444798 ']' 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2444798 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2444798 ']' 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2444798 
00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2444798 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2444798' 00:22:42.785 killing process with pid 2444798 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2444798 00:22:42.785 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2444798 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:43.045 22:53:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:45.583 00:22:45.583 real 0m11.105s 00:22:45.583 user 0m12.205s 00:22:45.583 sys 0m5.200s 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:45.583 ************************************ 00:22:45.583 END TEST nvmf_multicontroller 00:22:45.583 ************************************ 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:45.583 ************************************ 00:22:45.583 START TEST nvmf_aer 00:22:45.583 ************************************ 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:45.583 * Looking for test storage... 
00:22:45.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:45.583 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:45.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.584 --rc genhtml_branch_coverage=1 00:22:45.584 --rc genhtml_function_coverage=1 00:22:45.584 --rc genhtml_legend=1 00:22:45.584 --rc geninfo_all_blocks=1 00:22:45.584 --rc geninfo_unexecuted_blocks=1 00:22:45.584 00:22:45.584 ' 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:45.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.584 --rc 
genhtml_branch_coverage=1 00:22:45.584 --rc genhtml_function_coverage=1 00:22:45.584 --rc genhtml_legend=1 00:22:45.584 --rc geninfo_all_blocks=1 00:22:45.584 --rc geninfo_unexecuted_blocks=1 00:22:45.584 00:22:45.584 ' 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:45.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.584 --rc genhtml_branch_coverage=1 00:22:45.584 --rc genhtml_function_coverage=1 00:22:45.584 --rc genhtml_legend=1 00:22:45.584 --rc geninfo_all_blocks=1 00:22:45.584 --rc geninfo_unexecuted_blocks=1 00:22:45.584 00:22:45.584 ' 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:45.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:45.584 --rc genhtml_branch_coverage=1 00:22:45.584 --rc genhtml_function_coverage=1 00:22:45.584 --rc genhtml_legend=1 00:22:45.584 --rc geninfo_all_blocks=1 00:22:45.584 --rc geninfo_unexecuted_blocks=1 00:22:45.584 00:22:45.584 ' 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:45.584 22:53:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:45.584 22:53:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:45.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:22:45.584 22:53:53 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:52.161 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:52.161 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.161 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.162 22:53:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:52.162 Found net devices under 0000:86:00.0: cvl_0_0 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:52.162 Found net devices under 0000:86:00.1: cvl_0_1 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:52.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:22:52.162 00:22:52.162 --- 10.0.0.2 ping statistics --- 00:22:52.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.162 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:52.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:22:52.162 00:22:52.162 --- 10.0.0.1 ping statistics --- 00:22:52.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.162 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2448811 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2448811 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2448811 ']' 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:52.162 22:53:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.162 [2024-12-10 22:53:58.994469] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:22:52.162 [2024-12-10 22:53:58.994519] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.162 [2024-12-10 22:53:59.072971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.162 [2024-12-10 22:53:59.114100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:22:52.162 [2024-12-10 22:53:59.114137] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.162 [2024-12-10 22:53:59.114144] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.162 [2024-12-10 22:53:59.114150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.162 [2024-12-10 22:53:59.114155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.162 [2024-12-10 22:53:59.115690] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.162 [2024-12-10 22:53:59.115726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:52.162 [2024-12-10 22:53:59.115834] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.162 [2024-12-10 22:53:59.115835] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.162 [2024-12-10 22:53:59.265710] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.162 Malloc0 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.162 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 [2024-12-10 22:53:59.332870] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
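The rpc_cmd calls traced in host/aer.sh@14 through @19 above build the target in a fixed order: create the TCP transport, create a malloc bdev, create the subsystem, attach the namespace, then add the listener. The sketch below collects that sequence as a dry-run script; `RPC=echo` only prints the commands, and pointing `RPC` at a real rpc.py client is an assumption (the client's path does not appear in this log).

```shell
# Dry-run sketch of the target-setup RPC sequence seen in the trace above.
# RPC=echo prints each command; substitute a real SPDK rpc.py invocation
# (path assumed, not shown in this log) to run it against a live target.
RPC=${RPC:-echo}
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 --name Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The ordering matters: the listener is added last, which is why the "Target Listening" notice appears in the trace only after the namespace is in place.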
00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 [ 00:22:52.163 { 00:22:52.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:52.163 "subtype": "Discovery", 00:22:52.163 "listen_addresses": [], 00:22:52.163 "allow_any_host": true, 00:22:52.163 "hosts": [] 00:22:52.163 }, 00:22:52.163 { 00:22:52.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.163 "subtype": "NVMe", 00:22:52.163 "listen_addresses": [ 00:22:52.163 { 00:22:52.163 "trtype": "TCP", 00:22:52.163 "adrfam": "IPv4", 00:22:52.163 "traddr": "10.0.0.2", 00:22:52.163 "trsvcid": "4420" 00:22:52.163 } 00:22:52.163 ], 00:22:52.163 "allow_any_host": true, 00:22:52.163 "hosts": [], 00:22:52.163 "serial_number": "SPDK00000000000001", 00:22:52.163 "model_number": "SPDK bdev Controller", 00:22:52.163 "max_namespaces": 2, 00:22:52.163 "min_cntlid": 1, 00:22:52.163 "max_cntlid": 65519, 00:22:52.163 "namespaces": [ 00:22:52.163 { 00:22:52.163 "nsid": 1, 00:22:52.163 "bdev_name": "Malloc0", 00:22:52.163 "name": "Malloc0", 00:22:52.163 "nguid": "88A7FDAFCB6C4208934F08956EE5AB38", 00:22:52.163 "uuid": "88a7fdaf-cb6c-4208-934f-08956ee5ab38" 00:22:52.163 } 00:22:52.163 ] 00:22:52.163 } 00:22:52.163 ] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2448837 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 Malloc1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 Asynchronous Event Request test 00:22:52.163 Attaching to 10.0.0.2 00:22:52.163 Attached to 10.0.0.2 00:22:52.163 Registering asynchronous event callbacks... 00:22:52.163 Starting namespace attribute notice tests for all controllers... 00:22:52.163 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:52.163 aer_cb - Changed Namespace 00:22:52.163 Cleaning up... 
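The waitforfile loop traced above (autotest_common.sh@1269-1280) polls for the touch file the aer binary creates, sleeping 0.1s per attempt with a bounded retry count. A minimal re-sketch follows; the retry limit is shortened from the 200 tries in the trace, and the demo file name is invented for illustration.

```shell
# Re-sketch of the waitforfile helper traced above: poll for a file,
# giving up after a bounded number of 0.1s sleeps (trace uses 200 tries;
# shortened to 20 here for a faster demo).
waitforfile() {
  local i=0
  while [ ! -e "$1" ]; do
    if [ "$i" -ge 20 ]; then
      return 1                     # file never appeared within the budget
    fi
    sleep 0.1
    i=$((i + 1))
  done
  return 0
}

# Demo with a hypothetical touch file created after a short delay.
( sleep 0.3; touch /tmp/aer_touch_demo ) &
waitforfile /tmp/aer_touch_demo && echo "file appeared"
rm -f /tmp/aer_touch_demo
```

This is why the trace shows i stepping 0, 1, 2 between repeated `'[' '!' -e /tmp/aer_touch_file ']'` checks: each failed existence test costs one sleep and one increment.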
00:22:52.163 [ 00:22:52.163 { 00:22:52.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:52.163 "subtype": "Discovery", 00:22:52.163 "listen_addresses": [], 00:22:52.163 "allow_any_host": true, 00:22:52.163 "hosts": [] 00:22:52.163 }, 00:22:52.163 { 00:22:52.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:52.163 "subtype": "NVMe", 00:22:52.163 "listen_addresses": [ 00:22:52.163 { 00:22:52.163 "trtype": "TCP", 00:22:52.163 "adrfam": "IPv4", 00:22:52.163 "traddr": "10.0.0.2", 00:22:52.163 "trsvcid": "4420" 00:22:52.163 } 00:22:52.163 ], 00:22:52.163 "allow_any_host": true, 00:22:52.163 "hosts": [], 00:22:52.163 "serial_number": "SPDK00000000000001", 00:22:52.163 "model_number": "SPDK bdev Controller", 00:22:52.163 "max_namespaces": 2, 00:22:52.163 "min_cntlid": 1, 00:22:52.163 "max_cntlid": 65519, 00:22:52.163 "namespaces": [ 00:22:52.163 { 00:22:52.163 "nsid": 1, 00:22:52.163 "bdev_name": "Malloc0", 00:22:52.163 "name": "Malloc0", 00:22:52.163 "nguid": "88A7FDAFCB6C4208934F08956EE5AB38", 00:22:52.163 "uuid": "88a7fdaf-cb6c-4208-934f-08956ee5ab38" 00:22:52.163 }, 00:22:52.163 { 00:22:52.163 "nsid": 2, 00:22:52.163 "bdev_name": "Malloc1", 00:22:52.163 "name": "Malloc1", 00:22:52.163 "nguid": "1E17E93953EE4B72B2CA0791F09C684C", 00:22:52.163 "uuid": "1e17e939-53ee-4b72-b2ca-0791f09c684c" 00:22:52.163 } 00:22:52.163 ] 00:22:52.163 } 00:22:52.163 ] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2448837 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:52.163 rmmod nvme_tcp 00:22:52.163 rmmod nvme_fabrics 00:22:52.163 rmmod nvme_keyring 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
2448811 ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2448811 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2448811 ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2448811 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2448811 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2448811' 00:22:52.163 killing process with pid 2448811 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2448811 00:22:52.163 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2448811 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.423 22:53:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:54.404 00:22:54.404 real 0m9.248s 00:22:54.404 user 0m5.221s 00:22:54.404 sys 0m4.865s 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:54.404 ************************************ 00:22:54.404 END TEST nvmf_aer 00:22:54.404 ************************************ 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:54.404 22:54:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.698 ************************************ 00:22:54.698 START TEST nvmf_async_init 00:22:54.698 ************************************ 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:54.698 * Looking for test storage... 
00:22:54.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:54.698 22:54:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.698 --rc genhtml_branch_coverage=1 00:22:54.698 --rc genhtml_function_coverage=1 00:22:54.698 --rc genhtml_legend=1 00:22:54.698 --rc geninfo_all_blocks=1 00:22:54.698 --rc geninfo_unexecuted_blocks=1 00:22:54.698 
00:22:54.698 ' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.698 --rc genhtml_branch_coverage=1 00:22:54.698 --rc genhtml_function_coverage=1 00:22:54.698 --rc genhtml_legend=1 00:22:54.698 --rc geninfo_all_blocks=1 00:22:54.698 --rc geninfo_unexecuted_blocks=1 00:22:54.698 00:22:54.698 ' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.698 --rc genhtml_branch_coverage=1 00:22:54.698 --rc genhtml_function_coverage=1 00:22:54.698 --rc genhtml_legend=1 00:22:54.698 --rc geninfo_all_blocks=1 00:22:54.698 --rc geninfo_unexecuted_blocks=1 00:22:54.698 00:22:54.698 ' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:54.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:54.698 --rc genhtml_branch_coverage=1 00:22:54.698 --rc genhtml_function_coverage=1 00:22:54.698 --rc genhtml_legend=1 00:22:54.698 --rc geninfo_all_blocks=1 00:22:54.698 --rc geninfo_unexecuted_blocks=1 00:22:54.698 00:22:54.698 ' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.698 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:54.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=961162bd634e466698ed0e36d9c3eb04 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:22:54.699 22:54:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:01.320 22:54:07 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:01.320 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:01.320 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.320 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:01.321 Found net devices under 0000:86:00.0: cvl_0_0 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:01.321 Found net devices under 0000:86:00.1: cvl_0_1 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:01.321 22:54:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:01.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:01.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:23:01.321 00:23:01.321 --- 10.0.0.2 ping statistics --- 00:23:01.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.321 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.321 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:01.321 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:23:01.321 00:23:01.321 --- 10.0.0.1 ping statistics --- 00:23:01.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.321 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2452594 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2452594 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2452594 ']' 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 [2024-12-10 22:54:08.381549] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:23:01.321 [2024-12-10 22:54:08.381594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.321 [2024-12-10 22:54:08.458992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.321 [2024-12-10 22:54:08.499392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.321 [2024-12-10 22:54:08.499427] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.321 [2024-12-10 22:54:08.499434] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:01.321 [2024-12-10 22:54:08.499440] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:01.321 [2024-12-10 22:54:08.499445] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:01.321 [2024-12-10 22:54:08.499998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 [2024-12-10 22:54:08.641085] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 null0 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 961162bd634e466698ed0e36d9c3eb04 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.321 [2024-12-10 22:54:08.693351] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.321 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.322 nvme0n1 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.322 [ 00:23:01.322 { 00:23:01.322 "name": "nvme0n1", 00:23:01.322 "aliases": [ 00:23:01.322 "961162bd-634e-4666-98ed-0e36d9c3eb04" 00:23:01.322 ], 00:23:01.322 "product_name": "NVMe disk", 00:23:01.322 "block_size": 512, 00:23:01.322 "num_blocks": 2097152, 00:23:01.322 "uuid": "961162bd-634e-4666-98ed-0e36d9c3eb04", 00:23:01.322 "numa_id": 1, 00:23:01.322 "assigned_rate_limits": { 00:23:01.322 "rw_ios_per_sec": 0, 00:23:01.322 "rw_mbytes_per_sec": 0, 00:23:01.322 "r_mbytes_per_sec": 0, 00:23:01.322 "w_mbytes_per_sec": 0 00:23:01.322 }, 00:23:01.322 "claimed": false, 00:23:01.322 "zoned": false, 00:23:01.322 "supported_io_types": { 00:23:01.322 "read": true, 00:23:01.322 "write": true, 00:23:01.322 "unmap": false, 00:23:01.322 "flush": true, 00:23:01.322 "reset": true, 00:23:01.322 "nvme_admin": true, 00:23:01.322 "nvme_io": true, 00:23:01.322 "nvme_io_md": false, 00:23:01.322 "write_zeroes": true, 00:23:01.322 "zcopy": false, 00:23:01.322 "get_zone_info": false, 00:23:01.322 "zone_management": false, 00:23:01.322 "zone_append": false, 00:23:01.322 "compare": true, 00:23:01.322 "compare_and_write": true, 00:23:01.322 "abort": true, 00:23:01.322 "seek_hole": false, 00:23:01.322 "seek_data": false, 00:23:01.322 "copy": true, 00:23:01.322 
"nvme_iov_md": false 00:23:01.322 }, 00:23:01.322 "memory_domains": [ 00:23:01.322 { 00:23:01.322 "dma_device_id": "system", 00:23:01.322 "dma_device_type": 1 00:23:01.322 } 00:23:01.322 ], 00:23:01.322 "driver_specific": { 00:23:01.322 "nvme": [ 00:23:01.322 { 00:23:01.322 "trid": { 00:23:01.322 "trtype": "TCP", 00:23:01.322 "adrfam": "IPv4", 00:23:01.322 "traddr": "10.0.0.2", 00:23:01.322 "trsvcid": "4420", 00:23:01.322 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.322 }, 00:23:01.322 "ctrlr_data": { 00:23:01.322 "cntlid": 1, 00:23:01.322 "vendor_id": "0x8086", 00:23:01.322 "model_number": "SPDK bdev Controller", 00:23:01.322 "serial_number": "00000000000000000000", 00:23:01.322 "firmware_revision": "25.01", 00:23:01.322 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.322 "oacs": { 00:23:01.322 "security": 0, 00:23:01.322 "format": 0, 00:23:01.322 "firmware": 0, 00:23:01.322 "ns_manage": 0 00:23:01.322 }, 00:23:01.322 "multi_ctrlr": true, 00:23:01.322 "ana_reporting": false 00:23:01.322 }, 00:23:01.322 "vs": { 00:23:01.322 "nvme_version": "1.3" 00:23:01.322 }, 00:23:01.322 "ns_data": { 00:23:01.322 "id": 1, 00:23:01.322 "can_share": true 00:23:01.322 } 00:23:01.322 } 00:23:01.322 ], 00:23:01.322 "mp_policy": "active_passive" 00:23:01.322 } 00:23:01.322 } 00:23:01.322 ] 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.322 22:54:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.322 [2024-12-10 22:54:08.961876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:01.322 [2024-12-10 22:54:08.961932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x12c6c40 (9): Bad file descriptor 00:23:01.581 [2024-12-10 22:54:09.094238] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.581 [ 00:23:01.581 { 00:23:01.581 "name": "nvme0n1", 00:23:01.581 "aliases": [ 00:23:01.581 "961162bd-634e-4666-98ed-0e36d9c3eb04" 00:23:01.581 ], 00:23:01.581 "product_name": "NVMe disk", 00:23:01.581 "block_size": 512, 00:23:01.581 "num_blocks": 2097152, 00:23:01.581 "uuid": "961162bd-634e-4666-98ed-0e36d9c3eb04", 00:23:01.581 "numa_id": 1, 00:23:01.581 "assigned_rate_limits": { 00:23:01.581 "rw_ios_per_sec": 0, 00:23:01.581 "rw_mbytes_per_sec": 0, 00:23:01.581 "r_mbytes_per_sec": 0, 00:23:01.581 "w_mbytes_per_sec": 0 00:23:01.581 }, 00:23:01.581 "claimed": false, 00:23:01.581 "zoned": false, 00:23:01.581 "supported_io_types": { 00:23:01.581 "read": true, 00:23:01.581 "write": true, 00:23:01.581 "unmap": false, 00:23:01.581 "flush": true, 00:23:01.581 "reset": true, 00:23:01.581 "nvme_admin": true, 00:23:01.581 "nvme_io": true, 00:23:01.581 "nvme_io_md": false, 00:23:01.581 "write_zeroes": true, 00:23:01.581 "zcopy": false, 00:23:01.581 "get_zone_info": false, 00:23:01.581 "zone_management": false, 00:23:01.581 "zone_append": false, 00:23:01.581 "compare": true, 00:23:01.581 "compare_and_write": true, 00:23:01.581 "abort": true, 00:23:01.581 "seek_hole": false, 00:23:01.581 "seek_data": false, 00:23:01.581 "copy": true, 00:23:01.581 "nvme_iov_md": false 00:23:01.581 }, 00:23:01.581 "memory_domains": [ 
00:23:01.581 { 00:23:01.581 "dma_device_id": "system", 00:23:01.581 "dma_device_type": 1 00:23:01.581 } 00:23:01.581 ], 00:23:01.581 "driver_specific": { 00:23:01.581 "nvme": [ 00:23:01.581 { 00:23:01.581 "trid": { 00:23:01.581 "trtype": "TCP", 00:23:01.581 "adrfam": "IPv4", 00:23:01.581 "traddr": "10.0.0.2", 00:23:01.581 "trsvcid": "4420", 00:23:01.581 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.581 }, 00:23:01.581 "ctrlr_data": { 00:23:01.581 "cntlid": 2, 00:23:01.581 "vendor_id": "0x8086", 00:23:01.581 "model_number": "SPDK bdev Controller", 00:23:01.581 "serial_number": "00000000000000000000", 00:23:01.581 "firmware_revision": "25.01", 00:23:01.581 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.581 "oacs": { 00:23:01.581 "security": 0, 00:23:01.581 "format": 0, 00:23:01.581 "firmware": 0, 00:23:01.581 "ns_manage": 0 00:23:01.581 }, 00:23:01.581 "multi_ctrlr": true, 00:23:01.581 "ana_reporting": false 00:23:01.581 }, 00:23:01.581 "vs": { 00:23:01.581 "nvme_version": "1.3" 00:23:01.581 }, 00:23:01.581 "ns_data": { 00:23:01.581 "id": 1, 00:23:01.581 "can_share": true 00:23:01.581 } 00:23:01.581 } 00:23:01.581 ], 00:23:01.581 "mp_policy": "active_passive" 00:23:01.581 } 00:23:01.581 } 00:23:01.581 ] 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.M7Ogo95chy 
00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.M7Ogo95chy 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.M7Ogo95chy 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.581 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.581 [2024-12-10 22:54:09.170519] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:01.582 [2024-12-10 22:54:09.170613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.582 [2024-12-10 22:54:09.190584] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.582 nvme0n1 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.582 [ 00:23:01.582 { 00:23:01.582 "name": "nvme0n1", 00:23:01.582 "aliases": [ 00:23:01.582 "961162bd-634e-4666-98ed-0e36d9c3eb04" 00:23:01.582 ], 00:23:01.582 "product_name": "NVMe disk", 00:23:01.582 "block_size": 512, 00:23:01.582 "num_blocks": 2097152, 00:23:01.582 "uuid": "961162bd-634e-4666-98ed-0e36d9c3eb04", 00:23:01.582 "numa_id": 1, 00:23:01.582 "assigned_rate_limits": { 00:23:01.582 "rw_ios_per_sec": 0, 00:23:01.582 
"rw_mbytes_per_sec": 0, 00:23:01.582 "r_mbytes_per_sec": 0, 00:23:01.582 "w_mbytes_per_sec": 0 00:23:01.582 }, 00:23:01.582 "claimed": false, 00:23:01.582 "zoned": false, 00:23:01.582 "supported_io_types": { 00:23:01.582 "read": true, 00:23:01.582 "write": true, 00:23:01.582 "unmap": false, 00:23:01.582 "flush": true, 00:23:01.582 "reset": true, 00:23:01.582 "nvme_admin": true, 00:23:01.582 "nvme_io": true, 00:23:01.582 "nvme_io_md": false, 00:23:01.582 "write_zeroes": true, 00:23:01.582 "zcopy": false, 00:23:01.582 "get_zone_info": false, 00:23:01.582 "zone_management": false, 00:23:01.582 "zone_append": false, 00:23:01.582 "compare": true, 00:23:01.582 "compare_and_write": true, 00:23:01.582 "abort": true, 00:23:01.582 "seek_hole": false, 00:23:01.582 "seek_data": false, 00:23:01.582 "copy": true, 00:23:01.582 "nvme_iov_md": false 00:23:01.582 }, 00:23:01.582 "memory_domains": [ 00:23:01.582 { 00:23:01.582 "dma_device_id": "system", 00:23:01.582 "dma_device_type": 1 00:23:01.582 } 00:23:01.582 ], 00:23:01.582 "driver_specific": { 00:23:01.582 "nvme": [ 00:23:01.582 { 00:23:01.582 "trid": { 00:23:01.582 "trtype": "TCP", 00:23:01.582 "adrfam": "IPv4", 00:23:01.582 "traddr": "10.0.0.2", 00:23:01.582 "trsvcid": "4421", 00:23:01.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:01.582 }, 00:23:01.582 "ctrlr_data": { 00:23:01.582 "cntlid": 3, 00:23:01.582 "vendor_id": "0x8086", 00:23:01.582 "model_number": "SPDK bdev Controller", 00:23:01.582 "serial_number": "00000000000000000000", 00:23:01.582 "firmware_revision": "25.01", 00:23:01.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:01.582 "oacs": { 00:23:01.582 "security": 0, 00:23:01.582 "format": 0, 00:23:01.582 "firmware": 0, 00:23:01.582 "ns_manage": 0 00:23:01.582 }, 00:23:01.582 "multi_ctrlr": true, 00:23:01.582 "ana_reporting": false 00:23:01.582 }, 00:23:01.582 "vs": { 00:23:01.582 "nvme_version": "1.3" 00:23:01.582 }, 00:23:01.582 "ns_data": { 00:23:01.582 "id": 1, 00:23:01.582 "can_share": true 00:23:01.582 } 
00:23:01.582 } 00:23:01.582 ], 00:23:01.582 "mp_policy": "active_passive" 00:23:01.582 } 00:23:01.582 } 00:23:01.582 ] 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.M7Ogo95chy 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:01.582 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:01.582 rmmod nvme_tcp 00:23:01.841 rmmod nvme_fabrics 00:23:01.841 rmmod nvme_keyring 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:23:01.841 22:54:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2452594 ']' 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2452594 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2452594 ']' 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2452594 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2452594 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2452594' 00:23:01.841 killing process with pid 2452594 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2452594 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2452594 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:01.841 
22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:01.841 22:54:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.376 22:54:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:04.376 00:23:04.376 real 0m9.497s 00:23:04.376 user 0m3.048s 00:23:04.376 sys 0m4.843s 00:23:04.376 22:54:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.376 22:54:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:04.377 ************************************ 00:23:04.377 END TEST nvmf_async_init 00:23:04.377 ************************************ 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.377 ************************************ 00:23:04.377 START TEST dma 00:23:04.377 ************************************ 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:23:04.377 * Looking for test storage... 00:23:04.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:04.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.377 --rc genhtml_branch_coverage=1 00:23:04.377 --rc genhtml_function_coverage=1 00:23:04.377 --rc genhtml_legend=1 00:23:04.377 --rc geninfo_all_blocks=1 00:23:04.377 --rc geninfo_unexecuted_blocks=1 00:23:04.377 00:23:04.377 ' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:04.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.377 --rc genhtml_branch_coverage=1 00:23:04.377 --rc genhtml_function_coverage=1 
00:23:04.377 --rc genhtml_legend=1 00:23:04.377 --rc geninfo_all_blocks=1 00:23:04.377 --rc geninfo_unexecuted_blocks=1 00:23:04.377 00:23:04.377 ' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:04.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.377 --rc genhtml_branch_coverage=1 00:23:04.377 --rc genhtml_function_coverage=1 00:23:04.377 --rc genhtml_legend=1 00:23:04.377 --rc geninfo_all_blocks=1 00:23:04.377 --rc geninfo_unexecuted_blocks=1 00:23:04.377 00:23:04.377 ' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:04.377 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.377 --rc genhtml_branch_coverage=1 00:23:04.377 --rc genhtml_function_coverage=1 00:23:04.377 --rc genhtml_legend=1 00:23:04.377 --rc geninfo_all_blocks=1 00:23:04.377 --rc geninfo_unexecuted_blocks=1 00:23:04.377 00:23:04.377 ' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:04.377 
22:54:11 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:04.377 00:23:04.377 real 0m0.209s 00:23:04.377 user 0m0.132s 00:23:04.377 sys 0m0.090s 00:23:04.377 22:54:11 
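The `[: : integer expression expected` message above comes from evaluating `'[' '' -eq 1 ']'`: POSIX `test(1)` requires integer operands for `-eq`, so an unset or empty variable makes the comparison error out rather than evaluate false. A minimal standalone sketch of the failure mode and one common guard (the `check_flag` helper and the `${1:-0}` default are illustrative, not the SPDK common.sh code):

```shell
# Sketch of the "[: : integer expression expected" failure mode:
# test(1)'s -eq needs integer operands, and an empty string is not one.
check_flag() {
  # ${1:-0} substitutes 0 when $1 is unset or empty, so the -eq test
  # always sees an integer and falls through to "disabled" instead of
  # erroring the way a bare [ "$flag" -eq 1 ] would.
  if [ "${1:-0}" -eq 1 ]; then
    echo "enabled"
  else
    echo "disabled"
  fi
}

check_flag ""   # empty value no longer trips the integer check
check_flag 1
```

With the guard, an empty flag prints `disabled` and `1` prints `enabled`, matching how a defaulted `[ "${VAR:-0}" -eq 1 ]` test behaves in scripts that source optional configuration.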
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:04.377 22:54:11 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:04.377 ************************************ 00:23:04.377 END TEST dma 00:23:04.378 ************************************ 00:23:04.378 22:54:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:04.378 22:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:04.378 22:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:04.378 22:54:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:04.378 ************************************ 00:23:04.378 START TEST nvmf_identify 00:23:04.378 ************************************ 00:23:04.378 22:54:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:04.378 * Looking for test storage... 
00:23:04.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:04.378 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:04.378 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:23:04.378 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # 
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.638 --rc genhtml_branch_coverage=1 00:23:04.638 --rc genhtml_function_coverage=1 00:23:04.638 --rc genhtml_legend=1 00:23:04.638 --rc geninfo_all_blocks=1 00:23:04.638 --rc geninfo_unexecuted_blocks=1 00:23:04.638 00:23:04.638 ' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 
-- # LCOV_OPTS=' 00:23:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.638 --rc genhtml_branch_coverage=1 00:23:04.638 --rc genhtml_function_coverage=1 00:23:04.638 --rc genhtml_legend=1 00:23:04.638 --rc geninfo_all_blocks=1 00:23:04.638 --rc geninfo_unexecuted_blocks=1 00:23:04.638 00:23:04.638 ' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.638 --rc genhtml_branch_coverage=1 00:23:04.638 --rc genhtml_function_coverage=1 00:23:04.638 --rc genhtml_legend=1 00:23:04.638 --rc geninfo_all_blocks=1 00:23:04.638 --rc geninfo_unexecuted_blocks=1 00:23:04.638 00:23:04.638 ' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:04.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:04.638 --rc genhtml_branch_coverage=1 00:23:04.638 --rc genhtml_function_coverage=1 00:23:04.638 --rc genhtml_legend=1 00:23:04.638 --rc geninfo_all_blocks=1 00:23:04.638 --rc geninfo_unexecuted_blocks=1 00:23:04.638 00:23:04.638 ' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 
-- # NVMF_IP_LEAST_ADDR=8 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:04.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:04.638 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify 
-- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:23:04.639 22:54:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:11.213 22:54:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:11.213 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:11.213 
22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:11.213 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:11.213 Found net devices under 0000:86:00.0: cvl_0_0 00:23:11.213 22:54:17 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.213 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:11.214 Found net devices under 0000:86:00.1: cvl_0_1 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:11.214 22:54:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:11.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:23:11.214 00:23:11.214 --- 10.0.0.2 ping statistics --- 00:23:11.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.214 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:11.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:11.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:23:11.214 00:23:11.214 --- 10.0.0.1 ping statistics --- 00:23:11.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.214 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2456713 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2456713 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2456713 ']' 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 [2024-12-10 22:54:18.268654] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:23:11.214 [2024-12-10 22:54:18.268705] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.214 [2024-12-10 22:54:18.349717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:11.214 [2024-12-10 22:54:18.393413] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.214 [2024-12-10 22:54:18.393445] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.214 [2024-12-10 22:54:18.393453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.214 [2024-12-10 22:54:18.393461] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.214 [2024-12-10 22:54:18.393467] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:11.214 [2024-12-10 22:54:18.394811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.214 [2024-12-10 22:54:18.394920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.214 [2024-12-10 22:54:18.394943] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.214 [2024-12-10 22:54:18.394944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 [2024-12-10 22:54:18.497775] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 Malloc0 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.214 22:54:18 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 [2024-12-10 22:54:18.600221] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 22:54:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.214 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.214 [ 00:23:11.214 { 00:23:11.215 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:11.215 "subtype": "Discovery", 00:23:11.215 "listen_addresses": [ 00:23:11.215 { 00:23:11.215 "trtype": "TCP", 00:23:11.215 "adrfam": "IPv4", 00:23:11.215 "traddr": "10.0.0.2", 00:23:11.215 "trsvcid": "4420" 00:23:11.215 } 00:23:11.215 ], 00:23:11.215 "allow_any_host": true, 00:23:11.215 "hosts": [] 00:23:11.215 }, 00:23:11.215 { 00:23:11.215 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:11.215 "subtype": "NVMe", 00:23:11.215 "listen_addresses": [ 00:23:11.215 { 00:23:11.215 "trtype": "TCP", 00:23:11.215 "adrfam": "IPv4", 00:23:11.215 "traddr": "10.0.0.2", 00:23:11.215 "trsvcid": "4420" 00:23:11.215 } 00:23:11.215 ], 00:23:11.215 "allow_any_host": true, 00:23:11.215 "hosts": [], 00:23:11.215 "serial_number": "SPDK00000000000001", 00:23:11.215 "model_number": "SPDK bdev Controller", 00:23:11.215 "max_namespaces": 32, 00:23:11.215 "min_cntlid": 1, 00:23:11.215 "max_cntlid": 65519, 00:23:11.215 "namespaces": [ 00:23:11.215 { 00:23:11.215 "nsid": 1, 00:23:11.215 "bdev_name": "Malloc0", 00:23:11.215 "name": "Malloc0", 00:23:11.215 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:11.215 "eui64": "ABCDEF0123456789", 00:23:11.215 "uuid": "19b8a2f7-22d3-4c45-8683-5ecd8b2e8c0d" 00:23:11.215 } 00:23:11.215 ] 00:23:11.215 } 00:23:11.215 ] 00:23:11.215 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.215 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:11.215 [2024-12-10 22:54:18.657904] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:23:11.215 [2024-12-10 22:54:18.657951] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456862 ] 00:23:11.215 [2024-12-10 22:54:18.699992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:23:11.215 [2024-12-10 22:54:18.700036] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:11.215 [2024-12-10 22:54:18.700041] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:11.215 [2024-12-10 22:54:18.700052] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:11.215 [2024-12-10 22:54:18.700060] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:11.215 [2024-12-10 22:54:18.703457] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:23:11.215 [2024-12-10 22:54:18.703495] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x243c690 0 00:23:11.215 [2024-12-10 22:54:18.711177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:11.215 [2024-12-10 22:54:18.711193] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:11.215 [2024-12-10 22:54:18.711197] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:11.215 [2024-12-10 22:54:18.711200] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:11.215 [2024-12-10 22:54:18.711231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.711237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.711240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.215 [2024-12-10 22:54:18.711253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:11.215 [2024-12-10 22:54:18.711273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.215 [2024-12-10 22:54:18.718167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.215 [2024-12-10 22:54:18.718176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.215 [2024-12-10 22:54:18.718179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.215 [2024-12-10 22:54:18.718213] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:11.215 [2024-12-10 22:54:18.718219] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:23:11.215 [2024-12-10 22:54:18.718224] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:23:11.215 [2024-12-10 22:54:18.718235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 
00:23:11.215 [2024-12-10 22:54:18.718249] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-12-10 22:54:18.718262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.215 [2024-12-10 22:54:18.718403] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.215 [2024-12-10 22:54:18.718408] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.215 [2024-12-10 22:54:18.718411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.215 [2024-12-10 22:54:18.718420] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:23:11.215 [2024-12-10 22:54:18.718426] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:23:11.215 [2024-12-10 22:54:18.718433] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.215 [2024-12-10 22:54:18.718445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-12-10 22:54:18.718455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.215 [2024-12-10 22:54:18.718523] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.215 [2024-12-10 22:54:18.718529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:23:11.215 [2024-12-10 22:54:18.718532] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718535] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.215 [2024-12-10 22:54:18.718540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:23:11.215 [2024-12-10 22:54:18.718547] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:11.215 [2024-12-10 22:54:18.718553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.215 [2024-12-10 22:54:18.718565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-12-10 22:54:18.718577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.215 [2024-12-10 22:54:18.718640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.215 [2024-12-10 22:54:18.718647] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.215 [2024-12-10 22:54:18.718650] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.215 [2024-12-10 22:54:18.718657] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:11.215 [2024-12-10 22:54:18.718665] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718669] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.215 [2024-12-10 22:54:18.718678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-12-10 22:54:18.718688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.215 [2024-12-10 22:54:18.718753] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.215 [2024-12-10 22:54:18.718759] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.215 [2024-12-10 22:54:18.718762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.215 [2024-12-10 22:54:18.718770] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:11.215 [2024-12-10 22:54:18.718774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:11.215 [2024-12-10 22:54:18.718780] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:11.215 [2024-12-10 22:54:18.718888] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:23:11.215 [2024-12-10 22:54:18.718893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:23:11.215 [2024-12-10 22:54:18.718902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718905] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.718908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.215 [2024-12-10 22:54:18.718914] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.215 [2024-12-10 22:54:18.718924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.215 [2024-12-10 22:54:18.718998] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.215 [2024-12-10 22:54:18.719003] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.215 [2024-12-10 22:54:18.719006] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.719009] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.215 [2024-12-10 22:54:18.719013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:11.215 [2024-12-10 22:54:18.719023] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.215 [2024-12-10 22:54:18.719026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.216 [2024-12-10 22:54:18.719046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.216 [2024-12-10 
22:54:18.719110] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.216 [2024-12-10 22:54:18.719116] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.216 [2024-12-10 22:54:18.719119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.216 [2024-12-10 22:54:18.719126] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:11.216 [2024-12-10 22:54:18.719130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:11.216 [2024-12-10 22:54:18.719137] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:23:11.216 [2024-12-10 22:54:18.719148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:11.216 [2024-12-10 22:54:18.719161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.216 [2024-12-10 22:54:18.719181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.216 [2024-12-10 22:54:18.719281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.216 [2024-12-10 22:54:18.719287] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:23:11.216 [2024-12-10 22:54:18.719290] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719293] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243c690): datao=0, datal=4096, cccid=0 00:23:11.216 [2024-12-10 22:54:18.719297] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249e100) on tqpair(0x243c690): expected_datao=0, payload_size=4096 00:23:11.216 [2024-12-10 22:54:18.719302] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719308] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719312] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.216 [2024-12-10 22:54:18.719328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.216 [2024-12-10 22:54:18.719331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.216 [2024-12-10 22:54:18.719341] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:23:11.216 [2024-12-10 22:54:18.719345] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:23:11.216 [2024-12-10 22:54:18.719349] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:23:11.216 [2024-12-10 22:54:18.719354] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:23:11.216 [2024-12-10 22:54:18.719358] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:23:11.216 [2024-12-10 22:54:18.719362] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:23:11.216 [2024-12-10 22:54:18.719374] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:11.216 [2024-12-10 22:54:18.719384] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719388] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719391] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719397] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:11.216 [2024-12-10 22:54:18.719407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.216 [2024-12-10 22:54:18.719484] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.216 [2024-12-10 22:54:18.719489] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.216 [2024-12-10 22:54:18.719492] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.216 [2024-12-10 22:54:18.719502] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719508] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719513] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.216 [2024-12-10 22:54:18.719519] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719522] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.216 [2024-12-10 22:54:18.719535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.216 [2024-12-10 22:54:18.719551] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719558] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.216 [2024-12-10 22:54:18.719567] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:11.216 [2024-12-10 22:54:18.719577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:11.216 [2024-12-10 22:54:18.719582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719586] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719591] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.216 [2024-12-10 22:54:18.719602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e100, cid 0, qid 0 00:23:11.216 [2024-12-10 22:54:18.719608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e280, cid 1, qid 0 00:23:11.216 [2024-12-10 22:54:18.719613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e400, cid 2, qid 0 00:23:11.216 [2024-12-10 22:54:18.719617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.216 [2024-12-10 22:54:18.719621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e700, cid 4, qid 0 00:23:11.216 [2024-12-10 22:54:18.719719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.216 [2024-12-10 22:54:18.719725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.216 [2024-12-10 22:54:18.719728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e700) on tqpair=0x243c690 00:23:11.216 [2024-12-10 22:54:18.719736] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:23:11.216 [2024-12-10 22:54:18.719741] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:23:11.216 [2024-12-10 22:54:18.719750] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719753] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.216 [2024-12-10 22:54:18.719768] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e700, cid 4, qid 0 00:23:11.216 [2024-12-10 22:54:18.719846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.216 [2024-12-10 22:54:18.719852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.216 [2024-12-10 22:54:18.719855] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719858] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243c690): datao=0, datal=4096, cccid=4 00:23:11.216 [2024-12-10 22:54:18.719862] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249e700) on tqpair(0x243c690): expected_datao=0, payload_size=4096 00:23:11.216 [2024-12-10 22:54:18.719865] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719871] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719874] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.216 [2024-12-10 22:54:18.719889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.216 [2024-12-10 22:54:18.719892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x249e700) on tqpair=0x243c690 00:23:11.216 [2024-12-10 22:54:18.719906] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:23:11.216 [2024-12-10 22:54:18.719928] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.216 [2024-12-10 22:54:18.719943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719946] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.216 [2024-12-10 22:54:18.719949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x243c690) 00:23:11.216 [2024-12-10 22:54:18.719954] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.216 [2024-12-10 22:54:18.719967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e700, cid 4, qid 0 00:23:11.216 [2024-12-10 22:54:18.719973] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e880, cid 5, qid 0 00:23:11.217 [2024-12-10 22:54:18.720075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.217 [2024-12-10 22:54:18.720081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.217 [2024-12-10 22:54:18.720085] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.720088] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243c690): datao=0, datal=1024, cccid=4 00:23:11.217 [2024-12-10 22:54:18.720091] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249e700) on tqpair(0x243c690): expected_datao=0, payload_size=1024 00:23:11.217 [2024-12-10 22:54:18.720095] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.720101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.720104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.720109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.217 [2024-12-10 22:54:18.720114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.217 [2024-12-10 22:54:18.720117] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.720120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e880) on tqpair=0x243c690 00:23:11.217 [2024-12-10 22:54:18.764169] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.217 [2024-12-10 22:54:18.764183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.217 [2024-12-10 22:54:18.764186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.764190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e700) on tqpair=0x243c690 00:23:11.217 [2024-12-10 22:54:18.764202] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.764206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243c690) 00:23:11.217 [2024-12-10 22:54:18.764213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.217 [2024-12-10 22:54:18.764229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e700, cid 4, qid 0 00:23:11.217 [2024-12-10 22:54:18.764385] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.217 [2024-12-10 22:54:18.764391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.217 [2024-12-10 22:54:18.764394] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.764397] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243c690): datao=0, datal=3072, cccid=4 00:23:11.217 [2024-12-10 22:54:18.764401] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249e700) on tqpair(0x243c690): expected_datao=0, payload_size=3072 00:23:11.217 [2024-12-10 22:54:18.764406] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.764421] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.764425] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.806290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.217 [2024-12-10 22:54:18.806299] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.217 [2024-12-10 22:54:18.806303] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.806306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e700) on tqpair=0x243c690 00:23:11.217 [2024-12-10 22:54:18.806315] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.806319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x243c690) 00:23:11.217 [2024-12-10 22:54:18.806325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.217 [2024-12-10 22:54:18.806344] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e700, cid 4, qid 0 00:23:11.217 [2024-12-10 
22:54:18.806412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.217 [2024-12-10 22:54:18.806418] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.217 [2024-12-10 22:54:18.806421] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.806424] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x243c690): datao=0, datal=8, cccid=4 00:23:11.217 [2024-12-10 22:54:18.806428] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x249e700) on tqpair(0x243c690): expected_datao=0, payload_size=8 00:23:11.217 [2024-12-10 22:54:18.806432] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.806438] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.806441] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.852167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.217 [2024-12-10 22:54:18.852176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.217 [2024-12-10 22:54:18.852180] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.217 [2024-12-10 22:54:18.852183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e700) on tqpair=0x243c690
00:23:11.217 =====================================================
00:23:11.217 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:11.217 =====================================================
00:23:11.217 Controller Capabilities/Features
00:23:11.217 ================================
00:23:11.217 Vendor ID: 0000
00:23:11.217 Subsystem Vendor ID: 0000
00:23:11.217 Serial Number: ....................
00:23:11.217 Model Number: ........................................
00:23:11.217 Firmware Version: 25.01
00:23:11.217 Recommended Arb Burst: 0
00:23:11.217 IEEE OUI Identifier: 00 00 00
00:23:11.217 Multi-path I/O
00:23:11.217 May have multiple subsystem ports: No
00:23:11.217 May have multiple controllers: No
00:23:11.217 Associated with SR-IOV VF: No
00:23:11.217 Max Data Transfer Size: 131072
00:23:11.217 Max Number of Namespaces: 0
00:23:11.217 Max Number of I/O Queues: 1024
00:23:11.217 NVMe Specification Version (VS): 1.3
00:23:11.217 NVMe Specification Version (Identify): 1.3
00:23:11.217 Maximum Queue Entries: 128
00:23:11.217 Contiguous Queues Required: Yes
00:23:11.217 Arbitration Mechanisms Supported
00:23:11.217 Weighted Round Robin: Not Supported
00:23:11.217 Vendor Specific: Not Supported
00:23:11.217 Reset Timeout: 15000 ms
00:23:11.217 Doorbell Stride: 4 bytes
00:23:11.217 NVM Subsystem Reset: Not Supported
00:23:11.217 Command Sets Supported
00:23:11.217 NVM Command Set: Supported
00:23:11.217 Boot Partition: Not Supported
00:23:11.217 Memory Page Size Minimum: 4096 bytes
00:23:11.217 Memory Page Size Maximum: 4096 bytes
00:23:11.217 Persistent Memory Region: Not Supported
00:23:11.217 Optional Asynchronous Events Supported
00:23:11.217 Namespace Attribute Notices: Not Supported
00:23:11.217 Firmware Activation Notices: Not Supported
00:23:11.217 ANA Change Notices: Not Supported
00:23:11.217 PLE Aggregate Log Change Notices: Not Supported
00:23:11.217 LBA Status Info Alert Notices: Not Supported
00:23:11.217 EGE Aggregate Log Change Notices: Not Supported
00:23:11.217 Normal NVM Subsystem Shutdown event: Not Supported
00:23:11.217 Zone Descriptor Change Notices: Not Supported
00:23:11.217 Discovery Log Change Notices: Supported
00:23:11.217 Controller Attributes
00:23:11.217 128-bit Host Identifier: Not Supported
00:23:11.217 Non-Operational Permissive Mode: Not Supported
00:23:11.217 NVM Sets: Not Supported
00:23:11.217 Read Recovery Levels: Not Supported
00:23:11.217 Endurance Groups: Not Supported
00:23:11.217 Predictable Latency Mode: Not Supported
00:23:11.217 Traffic Based Keep ALive: Not Supported
00:23:11.217 Namespace Granularity: Not Supported
00:23:11.217 SQ Associations: Not Supported
00:23:11.217 UUID List: Not Supported
00:23:11.217 Multi-Domain Subsystem: Not Supported
00:23:11.217 Fixed Capacity Management: Not Supported
00:23:11.217 Variable Capacity Management: Not Supported
00:23:11.217 Delete Endurance Group: Not Supported
00:23:11.217 Delete NVM Set: Not Supported
00:23:11.217 Extended LBA Formats Supported: Not Supported
00:23:11.217 Flexible Data Placement Supported: Not Supported
00:23:11.217
00:23:11.217 Controller Memory Buffer Support
00:23:11.217 ================================
00:23:11.217 Supported: No
00:23:11.217
00:23:11.217 Persistent Memory Region Support
00:23:11.217 ================================
00:23:11.217 Supported: No
00:23:11.217
00:23:11.217 Admin Command Set Attributes
00:23:11.217 ============================
00:23:11.217 Security Send/Receive: Not Supported
00:23:11.217 Format NVM: Not Supported
00:23:11.217 Firmware Activate/Download: Not Supported
00:23:11.217 Namespace Management: Not Supported
00:23:11.217 Device Self-Test: Not Supported
00:23:11.217 Directives: Not Supported
00:23:11.217 NVMe-MI: Not Supported
00:23:11.217 Virtualization Management: Not Supported
00:23:11.217 Doorbell Buffer Config: Not Supported
00:23:11.217 Get LBA Status Capability: Not Supported
00:23:11.218 Command & Feature Lockdown Capability: Not Supported
00:23:11.218 Abort Command Limit: 1
00:23:11.218 Async Event Request Limit: 4
00:23:11.218 Number of Firmware Slots: N/A
00:23:11.218 Firmware Slot 1 Read-Only: N/A
00:23:11.218 Firmware Activation Without Reset: N/A
00:23:11.218 Multiple Update Detection Support: N/A
00:23:11.218 Firmware Update Granularity: No Information Provided
00:23:11.218 Per-Namespace SMART Log: No
00:23:11.218 Asymmetric Namespace Access Log Page: Not Supported
00:23:11.218 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:11.218 Command Effects Log Page: Not Supported
00:23:11.218 Get Log Page Extended Data: Supported
00:23:11.218 Telemetry Log Pages: Not Supported
00:23:11.218 Persistent Event Log Pages: Not Supported
00:23:11.218 Supported Log Pages Log Page: May Support
00:23:11.218 Commands Supported & Effects Log Page: Not Supported
00:23:11.218 Feature Identifiers & Effects Log Page: May Support
00:23:11.218 NVMe-MI Commands & Effects Log Page: May Support
00:23:11.218 Data Area 4 for Telemetry Log: Not Supported
00:23:11.218 Error Log Page Entries Supported: 128
00:23:11.218 Keep Alive: Not Supported
00:23:11.218
00:23:11.218 NVM Command Set Attributes
00:23:11.218 ==========================
00:23:11.218 Submission Queue Entry Size
00:23:11.218 Max: 1
00:23:11.218 Min: 1
00:23:11.218 Completion Queue Entry Size
00:23:11.218 Max: 1
00:23:11.218 Min: 1
00:23:11.218 Number of Namespaces: 0
00:23:11.218 Compare Command: Not Supported
00:23:11.218 Write Uncorrectable Command: Not Supported
00:23:11.218 Dataset Management Command: Not Supported
00:23:11.218 Write Zeroes Command: Not Supported
00:23:11.218 Set Features Save Field: Not Supported
00:23:11.218 Reservations: Not Supported
00:23:11.218 Timestamp: Not Supported
00:23:11.218 Copy: Not Supported
00:23:11.218 Volatile Write Cache: Not Present
00:23:11.218 Atomic Write Unit (Normal): 1
00:23:11.218 Atomic Write Unit (PFail): 1
00:23:11.218 Atomic Compare & Write Unit: 1
00:23:11.218 Fused Compare & Write: Supported
00:23:11.218 Scatter-Gather List
00:23:11.218 SGL Command Set: Supported
00:23:11.218 SGL Keyed: Supported
00:23:11.218 SGL Bit Bucket Descriptor: Not Supported
00:23:11.218 SGL Metadata Pointer: Not Supported
00:23:11.218 Oversized SGL: Not Supported
00:23:11.218 SGL Metadata Address: Not Supported
00:23:11.218 SGL Offset: Supported
00:23:11.218 Transport SGL Data Block: Not Supported
00:23:11.218 Replay Protected Memory Block: Not Supported
00:23:11.218
00:23:11.218 Firmware Slot Information
00:23:11.218 =========================
00:23:11.218 Active slot: 0
00:23:11.218
00:23:11.218
00:23:11.218 Error Log
00:23:11.218 =========
00:23:11.218
00:23:11.218 Active Namespaces
00:23:11.218 =================
00:23:11.218 Discovery Log Page
00:23:11.218 ==================
00:23:11.218 Generation Counter: 2
00:23:11.218 Number of Records: 2
00:23:11.218 Record Format: 0
00:23:11.218
00:23:11.218 Discovery Log Entry 0
00:23:11.218 ----------------------
00:23:11.218 Transport Type: 3 (TCP)
00:23:11.218 Address Family: 1 (IPv4)
00:23:11.218 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:11.218 Entry Flags:
00:23:11.218 Duplicate Returned Information: 1
00:23:11.218 Explicit Persistent Connection Support for Discovery: 1
00:23:11.218 Transport Requirements:
00:23:11.218 Secure Channel: Not Required
00:23:11.218 Port ID: 0 (0x0000)
00:23:11.218 Controller ID: 65535 (0xffff)
00:23:11.218 Admin Max SQ Size: 128
00:23:11.218 Transport Service Identifier: 4420
00:23:11.218 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:11.218 Transport Address: 10.0.0.2
00:23:11.218 Discovery Log Entry 1
00:23:11.218 ----------------------
00:23:11.218 Transport Type: 3 (TCP)
00:23:11.218 Address Family: 1 (IPv4)
00:23:11.218 Subsystem Type: 2 (NVM Subsystem)
00:23:11.218 Entry Flags:
00:23:11.218 Duplicate Returned Information: 0
00:23:11.218 Explicit Persistent Connection Support for Discovery: 0
00:23:11.218 Transport Requirements:
00:23:11.218 Secure Channel: Not Required
00:23:11.218 Port ID: 0 (0x0000)
00:23:11.218 Controller ID: 65535 (0xffff)
00:23:11.218 Admin Max SQ Size: 128
00:23:11.218 Transport Service Identifier: 4420
00:23:11.218 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:11.218 Transport Address: 10.0.0.2
[2024-12-10 22:54:18.852263] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:23:11.218 [2024-12-10
22:54:18.852274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e100) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-12-10 22:54:18.852285] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e280) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-12-10 22:54:18.852293] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e400) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-12-10 22:54:18.852301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.218 [2024-12-10 22:54:18.852313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.218 [2024-12-10 22:54:18.852326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-12-10 22:54:18.852339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.218 [2024-12-10 22:54:18.852406] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.218 [2024-12-10 
22:54:18.852412] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.218 [2024-12-10 22:54:18.852415] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852425] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.218 [2024-12-10 22:54:18.852437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-12-10 22:54:18.852452] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.218 [2024-12-10 22:54:18.852528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.218 [2024-12-10 22:54:18.852534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.218 [2024-12-10 22:54:18.852537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852544] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:23:11.218 [2024-12-10 22:54:18.852548] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:23:11.218 [2024-12-10 22:54:18.852556] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.218 
[2024-12-10 22:54:18.852563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.218 [2024-12-10 22:54:18.852569] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-12-10 22:54:18.852578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.218 [2024-12-10 22:54:18.852655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.218 [2024-12-10 22:54:18.852660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.218 [2024-12-10 22:54:18.852663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852680] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852683] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.218 [2024-12-10 22:54:18.852689] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.218 [2024-12-10 22:54:18.852698] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.218 [2024-12-10 22:54:18.852765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.218 [2024-12-10 22:54:18.852771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.218 [2024-12-10 22:54:18.852774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on 
tqpair=0x243c690 00:23:11.218 [2024-12-10 22:54:18.852785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.218 [2024-12-10 22:54:18.852792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.852798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.852807] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.852882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.852888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.852891] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.852894] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.852902] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.852906] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.852909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.852916] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.852926] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.852999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:23:11.219 [2024-12-10 22:54:18.853008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853011] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853019] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853023] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853026] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853116] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853124] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853128] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853140] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853228] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853231] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853234] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853243] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853364] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853475] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853478] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853551] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853557] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853560] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853563] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853575] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853578] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853657] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853663] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853666] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853669] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853678] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853685] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853769] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853772] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853781] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.853884] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.853887] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853890] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.853899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853903] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.853906] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.853911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.853921] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.853995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 
22:54:18.854000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.854003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.854007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.854015] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.854018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.854022] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.854027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 22:54:18.854037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.219 [2024-12-10 22:54:18.854096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.219 [2024-12-10 22:54:18.854101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.219 [2024-12-10 22:54:18.854104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.854107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.219 [2024-12-10 22:54:18.854116] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.854120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.219 [2024-12-10 22:54:18.854123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.219 [2024-12-10 22:54:18.854129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.219 [2024-12-10 
22:54:18.854138] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854217] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854220] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854250] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854451] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854466] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854548] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854553] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854559] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854569] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854654] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854674] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854678] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854681] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854686] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854778] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854781] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854792] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854796] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854799] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854814] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.854889] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.854895] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.854898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854901] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.854909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.854916] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.854921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.854931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 
22:54:18.854995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.855000] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.855003] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855006] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.855016] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855020] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.855028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.855037] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.855099] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.855105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.855108] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855111] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.855119] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855123] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855126] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.855131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.855141] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.855219] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.855225] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.855230] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855233] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.855241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855248] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.855254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.855264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.855437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.855443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.855446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.855460] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855463] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:23:11.220 [2024-12-10 22:54:18.855466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.855472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.855481] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.855546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.855552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.855555] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855558] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.220 [2024-12-10 22:54:18.855566] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855570] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.220 [2024-12-10 22:54:18.855573] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.220 [2024-12-10 22:54:18.855579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.220 [2024-12-10 22:54:18.855588] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.220 [2024-12-10 22:54:18.859162] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.220 [2024-12-10 22:54:18.859170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.220 [2024-12-10 22:54:18.859173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.221 [2024-12-10 22:54:18.859177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) 
on tqpair=0x243c690 00:23:11.221 [2024-12-10 22:54:18.859187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.221 [2024-12-10 22:54:18.859191] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.221 [2024-12-10 22:54:18.859194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x243c690) 00:23:11.221 [2024-12-10 22:54:18.859200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.221 [2024-12-10 22:54:18.859210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x249e580, cid 3, qid 0 00:23:11.221 [2024-12-10 22:54:18.859351] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.221 [2024-12-10 22:54:18.859357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.221 [2024-12-10 22:54:18.859360] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.221 [2024-12-10 22:54:18.859365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x249e580) on tqpair=0x243c690 00:23:11.221 [2024-12-10 22:54:18.859372] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:23:11.221 00:23:11.221 22:54:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:23:11.221 [2024-12-10 22:54:18.897130] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:23:11.221 [2024-12-10 22:54:18.897183] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2456957 ] 00:23:11.483 [2024-12-10 22:54:18.937829] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:23:11.484 [2024-12-10 22:54:18.937869] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:11.484 [2024-12-10 22:54:18.937874] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:11.484 [2024-12-10 22:54:18.937886] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:11.484 [2024-12-10 22:54:18.937893] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:11.484 [2024-12-10 22:54:18.941321] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:23:11.484 [2024-12-10 22:54:18.941351] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b06690 0 00:23:11.484 [2024-12-10 22:54:18.948288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:11.484 [2024-12-10 22:54:18.948301] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:11.484 [2024-12-10 22:54:18.948305] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:11.484 [2024-12-10 22:54:18.948308] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:11.484 [2024-12-10 22:54:18.948333] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.948338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.948342] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.948352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:11.484 [2024-12-10 22:54:18.948366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.956168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.956176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.956179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.956194] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:11.484 [2024-12-10 22:54:18.956200] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:23:11.484 [2024-12-10 22:54:18.956205] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:23:11.484 [2024-12-10 22:54:18.956214] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.956230] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.956242] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.956372] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.956378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.956382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.956389] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:23:11.484 [2024-12-10 22:54:18.956396] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:23:11.484 [2024-12-10 22:54:18.956402] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956405] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.956415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.956424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.956489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.956494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.956498] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.956505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:23:11.484 [2024-12-10 22:54:18.956512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:23:11.484 [2024-12-10 22:54:18.956518] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.956530] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.956539] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.956603] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.956609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.956612] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.956620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:11.484 [2024-12-10 22:54:18.956629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.956641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.956651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.956724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.956730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.956733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.956740] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:23:11.484 [2024-12-10 22:54:18.956744] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:23:11.484 [2024-12-10 22:54:18.956751] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:11.484 [2024-12-10 22:54:18.956859] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:23:11.484 [2024-12-10 22:54:18.956863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:11.484 [2024-12-10 22:54:18.956870] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956873] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956876] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.956882] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.956891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.956953] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.956959] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.956962] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956966] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.956969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:11.484 [2024-12-10 22:54:18.956978] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.956984] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.956990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.956999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.957071] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.484 [2024-12-10 22:54:18.957077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.484 [2024-12-10 22:54:18.957080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.957083] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.484 [2024-12-10 22:54:18.957087] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:11.484 [2024-12-10 22:54:18.957091] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:23:11.484 [2024-12-10 22:54:18.957097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:23:11.484 [2024-12-10 22:54:18.957104] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:23:11.484 [2024-12-10 22:54:18.957114] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.957117] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.484 [2024-12-10 22:54:18.957122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.484 [2024-12-10 22:54:18.957132] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.484 [2024-12-10 22:54:18.957226] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.484 [2024-12-10 22:54:18.957232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.484 [2024-12-10 22:54:18.957235] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.957239] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=4096, cccid=0 00:23:11.484 [2024-12-10 22:54:18.957243] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68100) on tqpair(0x1b06690): expected_datao=0, payload_size=4096 00:23:11.484 [2024-12-10 22:54:18.957247] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.484 [2024-12-10 22:54:18.957258] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.957262] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.485 [2024-12-10 22:54:18.998300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.485 [2024-12-10 22:54:18.998304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998307] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.485 [2024-12-10 22:54:18.998314] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:23:11.485 [2024-12-10 22:54:18.998319] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:23:11.485 [2024-12-10 22:54:18.998323] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:23:11.485 [2024-12-10 22:54:18.998326] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:23:11.485 [2024-12-10 22:54:18.998330] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:23:11.485 [2024-12-10 22:54:18.998335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998346] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998355] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998359] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998362] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998369] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:11.485 [2024-12-10 22:54:18.998381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68100, cid 0, qid 0 00:23:11.485 [2024-12-10 22:54:18.998441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.485 [2024-12-10 22:54:18.998447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.485 [2024-12-10 22:54:18.998450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.485 [2024-12-10 22:54:18.998459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.485 [2024-12-10 22:54:18.998481] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:11.485 [2024-12-10 22:54:18.998497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998508] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.485 [2024-12-10 22:54:18.998513] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998517] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998520] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.485 [2024-12-10 22:54:18.998529] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998539] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998545] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998548] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998554] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.485 [2024-12-10 22:54:18.998565] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1b68100, cid 0, qid 0 00:23:11.485 [2024-12-10 22:54:18.998570] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68280, cid 1, qid 0 00:23:11.485 [2024-12-10 22:54:18.998574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68400, cid 2, qid 0 00:23:11.485 [2024-12-10 22:54:18.998578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.485 [2024-12-10 22:54:18.998582] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.485 [2024-12-10 22:54:18.998680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.485 [2024-12-10 22:54:18.998686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.485 [2024-12-10 22:54:18.998689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.485 [2024-12-10 22:54:18.998696] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:23:11.485 [2024-12-10 22:54:18.998701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998708] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998733] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 
22:54:18.998736] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:11.485 [2024-12-10 22:54:18.998752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.485 [2024-12-10 22:54:18.998819] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.485 [2024-12-10 22:54:18.998824] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.485 [2024-12-10 22:54:18.998827] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.485 [2024-12-10 22:54:18.998882] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998893] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:18.998899] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.998903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:18.998908] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.485 [2024-12-10 22:54:18.998918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.485 [2024-12-10 22:54:18.998992] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.485 [2024-12-10 22:54:18.998999] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.485 [2024-12-10 22:54:18.999002] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.999005] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=4096, cccid=4 00:23:11.485 [2024-12-10 22:54:18.999009] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68700) on tqpair(0x1b06690): expected_datao=0, payload_size=4096 00:23:11.485 [2024-12-10 22:54:18.999013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.999025] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:18.999029] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.043167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.485 [2024-12-10 22:54:19.043180] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.485 [2024-12-10 22:54:19.043183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.043186] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.485 [2024-12-10 22:54:19.043200] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:23:11.485 [2024-12-10 22:54:19.043212] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:19.043221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:23:11.485 [2024-12-10 22:54:19.043228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.043231] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1b06690) 00:23:11.485 [2024-12-10 22:54:19.043238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.485 [2024-12-10 22:54:19.043253] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.485 [2024-12-10 22:54:19.043352] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.485 [2024-12-10 22:54:19.043358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.485 [2024-12-10 22:54:19.043362] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.043365] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=4096, cccid=4 00:23:11.485 [2024-12-10 22:54:19.043369] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68700) on tqpair(0x1b06690): expected_datao=0, payload_size=4096 00:23:11.485 [2024-12-10 22:54:19.043373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.043385] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.043389] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.085293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.485 [2024-12-10 22:54:19.085303] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.485 [2024-12-10 22:54:19.085306] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.485 [2024-12-10 22:54:19.085310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 22:54:19.085324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:11.486 
[2024-12-10 22:54:19.085332] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.085340] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.085344] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.085350] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.085362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.486 [2024-12-10 22:54:19.085430] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.486 [2024-12-10 22:54:19.085437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.486 [2024-12-10 22:54:19.085440] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.085443] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=4096, cccid=4 00:23:11.486 [2024-12-10 22:54:19.085447] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68700) on tqpair(0x1b06690): expected_datao=0, payload_size=4096 00:23:11.486 [2024-12-10 22:54:19.085451] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.085463] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.085467] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.486 [2024-12-10 22:54:19.129176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.486 [2024-12-10 22:54:19.129179] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129183] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 22:54:19.129190] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129198] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129221] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129230] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:23:11.486 [2024-12-10 22:54:19.129234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:23:11.486 [2024-12-10 22:54:19.129239] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:23:11.486 [2024-12-10 22:54:19.129254] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129258] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129264] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129277] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.486 [2024-12-10 22:54:19.129296] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.486 [2024-12-10 22:54:19.129301] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68880, cid 5, qid 0 00:23:11.486 [2024-12-10 22:54:19.129376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.486 [2024-12-10 22:54:19.129382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.486 [2024-12-10 22:54:19.129385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 22:54:19.129394] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.486 [2024-12-10 22:54:19.129399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.486 [2024-12-10 22:54:19.129402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68880) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 
22:54:19.129413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129423] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129433] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68880, cid 5, qid 0 00:23:11.486 [2024-12-10 22:54:19.129506] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.486 [2024-12-10 22:54:19.129512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.486 [2024-12-10 22:54:19.129515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129518] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68880) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 22:54:19.129526] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129530] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129535] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129547] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68880, cid 5, qid 0 00:23:11.486 [2024-12-10 22:54:19.129604] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.486 [2024-12-10 22:54:19.129610] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.486 [2024-12-10 22:54:19.129613] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129616] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1b68880) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 22:54:19.129624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129633] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68880, cid 5, qid 0 00:23:11.486 [2024-12-10 22:54:19.129700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.486 [2024-12-10 22:54:19.129705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.486 [2024-12-10 22:54:19.129708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68880) on tqpair=0x1b06690 00:23:11.486 [2024-12-10 22:54:19.129725] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129729] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:11.486 [2024-12-10 22:54:19.129755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b06690) 00:23:11.486 [2024-12-10 22:54:19.129779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.486 [2024-12-10 22:54:19.129789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68880, cid 5, qid 0 00:23:11.486 [2024-12-10 22:54:19.129794] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68700, cid 4, qid 0 00:23:11.486 [2024-12-10 22:54:19.129798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68a00, cid 6, qid 0 00:23:11.486 [2024-12-10 22:54:19.129802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68b80, cid 7, qid 0 00:23:11.486 [2024-12-10 22:54:19.129933] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.486 [2024-12-10 22:54:19.129939] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.486 [2024-12-10 22:54:19.129942] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129945] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=8192, cccid=5 00:23:11.486 [2024-12-10 22:54:19.129951] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68880) on tqpair(0x1b06690): expected_datao=0, payload_size=8192 00:23:11.486 [2024-12-10 22:54:19.129955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129982] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129987] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.129991] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.486 [2024-12-10 22:54:19.129996] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.486 [2024-12-10 22:54:19.129999] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.130002] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=512, cccid=4 00:23:11.486 [2024-12-10 22:54:19.130006] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68700) on tqpair(0x1b06690): expected_datao=0, payload_size=512 00:23:11.486 [2024-12-10 22:54:19.130010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.130015] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.130019] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.486 [2024-12-10 22:54:19.130023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.486 [2024-12-10 22:54:19.130028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.486 [2024-12-10 22:54:19.130031] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130034] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=512, cccid=6 00:23:11.487 [2024-12-10 22:54:19.130038] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1b68a00) on tqpair(0x1b06690): expected_datao=0, payload_size=512 00:23:11.487 [2024-12-10 22:54:19.130042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130047] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130050] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130055] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:11.487 [2024-12-10 22:54:19.130060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:11.487 [2024-12-10 22:54:19.130063] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130066] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b06690): datao=0, datal=4096, cccid=7 00:23:11.487 [2024-12-10 22:54:19.130070] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b68b80) on tqpair(0x1b06690): expected_datao=0, payload_size=4096 00:23:11.487 [2024-12-10 22:54:19.130073] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130079] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130082] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130090] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.487 [2024-12-10 22:54:19.130095] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.487 [2024-12-10 22:54:19.130098] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130101] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68880) on tqpair=0x1b06690 00:23:11.487 [2024-12-10 22:54:19.130114] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.487 [2024-12-10 22:54:19.130119] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.487 [2024-12-10 22:54:19.130123] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68700) on tqpair=0x1b06690 00:23:11.487 [2024-12-10 22:54:19.130135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.487 [2024-12-10 22:54:19.130141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.487 [2024-12-10 22:54:19.130145] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130148] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68a00) on tqpair=0x1b06690 00:23:11.487 [2024-12-10 22:54:19.130154] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.487 [2024-12-10 22:54:19.130165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.487 [2024-12-10 22:54:19.130168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.487 [2024-12-10 22:54:19.130171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68b80) on tqpair=0x1b06690 00:23:11.487 ===================================================== 00:23:11.487 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:11.487 ===================================================== 00:23:11.487 Controller Capabilities/Features 00:23:11.487 ================================ 00:23:11.487 Vendor ID: 8086 00:23:11.487 Subsystem Vendor ID: 8086 00:23:11.487 Serial Number: SPDK00000000000001 00:23:11.487 Model Number: SPDK bdev Controller 00:23:11.487 Firmware Version: 25.01 00:23:11.487 Recommended Arb Burst: 6 00:23:11.487 IEEE OUI Identifier: e4 d2 5c 00:23:11.487 Multi-path I/O 00:23:11.487 May have multiple subsystem ports: Yes 00:23:11.487 May have multiple controllers: Yes 00:23:11.487 Associated with SR-IOV VF: No 
00:23:11.487 Max Data Transfer Size: 131072 00:23:11.487 Max Number of Namespaces: 32 00:23:11.487 Max Number of I/O Queues: 127 00:23:11.487 NVMe Specification Version (VS): 1.3 00:23:11.487 NVMe Specification Version (Identify): 1.3 00:23:11.487 Maximum Queue Entries: 128 00:23:11.487 Contiguous Queues Required: Yes 00:23:11.487 Arbitration Mechanisms Supported 00:23:11.487 Weighted Round Robin: Not Supported 00:23:11.487 Vendor Specific: Not Supported 00:23:11.487 Reset Timeout: 15000 ms 00:23:11.487 Doorbell Stride: 4 bytes 00:23:11.487 NVM Subsystem Reset: Not Supported 00:23:11.487 Command Sets Supported 00:23:11.487 NVM Command Set: Supported 00:23:11.487 Boot Partition: Not Supported 00:23:11.487 Memory Page Size Minimum: 4096 bytes 00:23:11.487 Memory Page Size Maximum: 4096 bytes 00:23:11.487 Persistent Memory Region: Not Supported 00:23:11.487 Optional Asynchronous Events Supported 00:23:11.487 Namespace Attribute Notices: Supported 00:23:11.487 Firmware Activation Notices: Not Supported 00:23:11.487 ANA Change Notices: Not Supported 00:23:11.487 PLE Aggregate Log Change Notices: Not Supported 00:23:11.487 LBA Status Info Alert Notices: Not Supported 00:23:11.487 EGE Aggregate Log Change Notices: Not Supported 00:23:11.487 Normal NVM Subsystem Shutdown event: Not Supported 00:23:11.487 Zone Descriptor Change Notices: Not Supported 00:23:11.487 Discovery Log Change Notices: Not Supported 00:23:11.487 Controller Attributes 00:23:11.487 128-bit Host Identifier: Supported 00:23:11.487 Non-Operational Permissive Mode: Not Supported 00:23:11.487 NVM Sets: Not Supported 00:23:11.487 Read Recovery Levels: Not Supported 00:23:11.487 Endurance Groups: Not Supported 00:23:11.487 Predictable Latency Mode: Not Supported 00:23:11.487 Traffic Based Keep ALive: Not Supported 00:23:11.487 Namespace Granularity: Not Supported 00:23:11.487 SQ Associations: Not Supported 00:23:11.487 UUID List: Not Supported 00:23:11.487 Multi-Domain Subsystem: Not Supported 00:23:11.487 
Fixed Capacity Management: Not Supported 00:23:11.487 Variable Capacity Management: Not Supported 00:23:11.487 Delete Endurance Group: Not Supported 00:23:11.487 Delete NVM Set: Not Supported 00:23:11.487 Extended LBA Formats Supported: Not Supported 00:23:11.487 Flexible Data Placement Supported: Not Supported 00:23:11.487 00:23:11.487 Controller Memory Buffer Support 00:23:11.487 ================================ 00:23:11.487 Supported: No 00:23:11.487 00:23:11.487 Persistent Memory Region Support 00:23:11.487 ================================ 00:23:11.487 Supported: No 00:23:11.487 00:23:11.487 Admin Command Set Attributes 00:23:11.487 ============================ 00:23:11.487 Security Send/Receive: Not Supported 00:23:11.487 Format NVM: Not Supported 00:23:11.487 Firmware Activate/Download: Not Supported 00:23:11.487 Namespace Management: Not Supported 00:23:11.487 Device Self-Test: Not Supported 00:23:11.487 Directives: Not Supported 00:23:11.487 NVMe-MI: Not Supported 00:23:11.487 Virtualization Management: Not Supported 00:23:11.487 Doorbell Buffer Config: Not Supported 00:23:11.487 Get LBA Status Capability: Not Supported 00:23:11.487 Command & Feature Lockdown Capability: Not Supported 00:23:11.487 Abort Command Limit: 4 00:23:11.487 Async Event Request Limit: 4 00:23:11.487 Number of Firmware Slots: N/A 00:23:11.487 Firmware Slot 1 Read-Only: N/A 00:23:11.487 Firmware Activation Without Reset: N/A 00:23:11.487 Multiple Update Detection Support: N/A 00:23:11.487 Firmware Update Granularity: No Information Provided 00:23:11.487 Per-Namespace SMART Log: No 00:23:11.487 Asymmetric Namespace Access Log Page: Not Supported 00:23:11.487 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:11.487 Command Effects Log Page: Supported 00:23:11.487 Get Log Page Extended Data: Supported 00:23:11.487 Telemetry Log Pages: Not Supported 00:23:11.487 Persistent Event Log Pages: Not Supported 00:23:11.487 Supported Log Pages Log Page: May Support 00:23:11.487 Commands Supported & 
Effects Log Page: Not Supported 00:23:11.487 Feature Identifiers & Effects Log Page:May Support 00:23:11.487 NVMe-MI Commands & Effects Log Page: May Support 00:23:11.487 Data Area 4 for Telemetry Log: Not Supported 00:23:11.487 Error Log Page Entries Supported: 128 00:23:11.487 Keep Alive: Supported 00:23:11.487 Keep Alive Granularity: 10000 ms 00:23:11.487 00:23:11.487 NVM Command Set Attributes 00:23:11.487 ========================== 00:23:11.487 Submission Queue Entry Size 00:23:11.487 Max: 64 00:23:11.487 Min: 64 00:23:11.487 Completion Queue Entry Size 00:23:11.487 Max: 16 00:23:11.487 Min: 16 00:23:11.487 Number of Namespaces: 32 00:23:11.487 Compare Command: Supported 00:23:11.487 Write Uncorrectable Command: Not Supported 00:23:11.487 Dataset Management Command: Supported 00:23:11.487 Write Zeroes Command: Supported 00:23:11.487 Set Features Save Field: Not Supported 00:23:11.487 Reservations: Supported 00:23:11.487 Timestamp: Not Supported 00:23:11.487 Copy: Supported 00:23:11.487 Volatile Write Cache: Present 00:23:11.487 Atomic Write Unit (Normal): 1 00:23:11.487 Atomic Write Unit (PFail): 1 00:23:11.487 Atomic Compare & Write Unit: 1 00:23:11.487 Fused Compare & Write: Supported 00:23:11.487 Scatter-Gather List 00:23:11.487 SGL Command Set: Supported 00:23:11.487 SGL Keyed: Supported 00:23:11.487 SGL Bit Bucket Descriptor: Not Supported 00:23:11.487 SGL Metadata Pointer: Not Supported 00:23:11.487 Oversized SGL: Not Supported 00:23:11.487 SGL Metadata Address: Not Supported 00:23:11.487 SGL Offset: Supported 00:23:11.487 Transport SGL Data Block: Not Supported 00:23:11.487 Replay Protected Memory Block: Not Supported 00:23:11.487 00:23:11.487 Firmware Slot Information 00:23:11.487 ========================= 00:23:11.487 Active slot: 1 00:23:11.487 Slot 1 Firmware Revision: 25.01 00:23:11.487 00:23:11.487 00:23:11.487 Commands Supported and Effects 00:23:11.487 ============================== 00:23:11.487 Admin Commands 00:23:11.487 -------------- 
00:23:11.487 Get Log Page (02h): Supported 00:23:11.487 Identify (06h): Supported 00:23:11.487 Abort (08h): Supported 00:23:11.487 Set Features (09h): Supported 00:23:11.487 Get Features (0Ah): Supported 00:23:11.487 Asynchronous Event Request (0Ch): Supported 00:23:11.488 Keep Alive (18h): Supported 00:23:11.488 I/O Commands 00:23:11.488 ------------ 00:23:11.488 Flush (00h): Supported LBA-Change 00:23:11.488 Write (01h): Supported LBA-Change 00:23:11.488 Read (02h): Supported 00:23:11.488 Compare (05h): Supported 00:23:11.488 Write Zeroes (08h): Supported LBA-Change 00:23:11.488 Dataset Management (09h): Supported LBA-Change 00:23:11.488 Copy (19h): Supported LBA-Change 00:23:11.488 00:23:11.488 Error Log 00:23:11.488 ========= 00:23:11.488 00:23:11.488 Arbitration 00:23:11.488 =========== 00:23:11.488 Arbitration Burst: 1 00:23:11.488 00:23:11.488 Power Management 00:23:11.488 ================ 00:23:11.488 Number of Power States: 1 00:23:11.488 Current Power State: Power State #0 00:23:11.488 Power State #0: 00:23:11.488 Max Power: 0.00 W 00:23:11.488 Non-Operational State: Operational 00:23:11.488 Entry Latency: Not Reported 00:23:11.488 Exit Latency: Not Reported 00:23:11.488 Relative Read Throughput: 0 00:23:11.488 Relative Read Latency: 0 00:23:11.488 Relative Write Throughput: 0 00:23:11.488 Relative Write Latency: 0 00:23:11.488 Idle Power: Not Reported 00:23:11.488 Active Power: Not Reported 00:23:11.488 Non-Operational Permissive Mode: Not Supported 00:23:11.488 00:23:11.488 Health Information 00:23:11.488 ================== 00:23:11.488 Critical Warnings: 00:23:11.488 Available Spare Space: OK 00:23:11.488 Temperature: OK 00:23:11.488 Device Reliability: OK 00:23:11.488 Read Only: No 00:23:11.488 Volatile Memory Backup: OK 00:23:11.488 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:11.488 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:11.488 Available Spare: 0% 00:23:11.488 Available Spare Threshold: 0% 00:23:11.488 Life Percentage 
Used:[2024-12-10 22:54:19.130255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130266] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.130277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68b80, cid 7, qid 0 00:23:11.488 [2024-12-10 22:54:19.130349] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.488 [2024-12-10 22:54:19.130355] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.488 [2024-12-10 22:54:19.130358] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130362] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68b80) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130388] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:23:11.488 [2024-12-10 22:54:19.130397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68100) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.488 [2024-12-10 22:54:19.130407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68280) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.488 [2024-12-10 22:54:19.130415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68400) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130419] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.488 [2024-12-10 22:54:19.130424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.488 [2024-12-10 22:54:19.130435] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130447] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.130458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.488 [2024-12-10 22:54:19.130522] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.488 [2024-12-10 22:54:19.130527] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.488 [2024-12-10 22:54:19.130530] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130534] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130543] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130546] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.130566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.488 [2024-12-10 22:54:19.130641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.488 [2024-12-10 22:54:19.130646] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.488 [2024-12-10 22:54:19.130649] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130653] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130657] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:23:11.488 [2024-12-10 22:54:19.130661] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:23:11.488 [2024-12-10 22:54:19.130669] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130672] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130675] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130681] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.130690] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.488 [2024-12-10 22:54:19.130755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.488 [2024-12-10 22:54:19.130761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.488 [2024-12-10 22:54:19.130764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130767] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.130797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.488 [2024-12-10 22:54:19.130865] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.488 [2024-12-10 22:54:19.130871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.488 [2024-12-10 22:54:19.130874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130877] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130889] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130892] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130897] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.130907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.488 [2024-12-10 22:54:19.130965] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.488 [2024-12-10 
22:54:19.130970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.488 [2024-12-10 22:54:19.130973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130977] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.488 [2024-12-10 22:54:19.130985] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.488 [2024-12-10 22:54:19.130994] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.488 [2024-12-10 22:54:19.130999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.488 [2024-12-10 22:54:19.131010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131079] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131090] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131093] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131097] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131102] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 
22:54:19.131111] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131193] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.131215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131296] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131308] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131311] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.131327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131398] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131401] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131404] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131412] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131415] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.131437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131501] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131507] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131510] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131521] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131528] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131534] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.131544] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.131650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.131719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.131725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.131728] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131732] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.131740] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131743] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.131746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.131752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.131762] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 22:54:19.135164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.135172] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.135175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.135179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.135189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.135192] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.135196] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b06690) 00:23:11.489 [2024-12-10 22:54:19.135204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:11.489 [2024-12-10 22:54:19.135215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b68580, cid 3, qid 0 00:23:11.489 [2024-12-10 
22:54:19.135365] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:11.489 [2024-12-10 22:54:19.135371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:11.489 [2024-12-10 22:54:19.135373] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:11.489 [2024-12-10 22:54:19.135377] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b68580) on tqpair=0x1b06690 00:23:11.489 [2024-12-10 22:54:19.135383] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:23:11.489 0% 00:23:11.489 Data Units Read: 0 00:23:11.489 Data Units Written: 0 00:23:11.489 Host Read Commands: 0 00:23:11.489 Host Write Commands: 0 00:23:11.489 Controller Busy Time: 0 minutes 00:23:11.489 Power Cycles: 0 00:23:11.489 Power On Hours: 0 hours 00:23:11.489 Unsafe Shutdowns: 0 00:23:11.489 Unrecoverable Media Errors: 0 00:23:11.489 Lifetime Error Log Entries: 0 00:23:11.489 Warning Temperature Time: 0 minutes 00:23:11.489 Critical Temperature Time: 0 minutes 00:23:11.489 00:23:11.489 Number of Queues 00:23:11.489 ================ 00:23:11.489 Number of I/O Submission Queues: 127 00:23:11.489 Number of I/O Completion Queues: 127 00:23:11.489 00:23:11.489 Active Namespaces 00:23:11.489 ================= 00:23:11.489 Namespace ID:1 00:23:11.489 Error Recovery Timeout: Unlimited 00:23:11.489 Command Set Identifier: NVM (00h) 00:23:11.489 Deallocate: Supported 00:23:11.489 Deallocated/Unwritten Error: Not Supported 00:23:11.489 Deallocated Read Value: Unknown 00:23:11.489 Deallocate in Write Zeroes: Not Supported 00:23:11.489 Deallocated Guard Field: 0xFFFF 00:23:11.489 Flush: Supported 00:23:11.489 Reservation: Supported 00:23:11.489 Namespace Sharing Capabilities: Multiple Controllers 00:23:11.489 Size (in LBAs): 131072 (0GiB) 00:23:11.489 Capacity (in LBAs): 131072 (0GiB) 00:23:11.489 Utilization (in LBAs): 131072 (0GiB) 00:23:11.489 NGUID: 
ABCDEF0123456789ABCDEF0123456789 00:23:11.489 EUI64: ABCDEF0123456789 00:23:11.489 UUID: 19b8a2f7-22d3-4c45-8683-5ecd8b2e8c0d 00:23:11.489 Thin Provisioning: Not Supported 00:23:11.489 Per-NS Atomic Units: Yes 00:23:11.489 Atomic Boundary Size (Normal): 0 00:23:11.489 Atomic Boundary Size (PFail): 0 00:23:11.489 Atomic Boundary Offset: 0 00:23:11.489 Maximum Single Source Range Length: 65535 00:23:11.489 Maximum Copy Length: 65535 00:23:11.489 Maximum Source Range Count: 1 00:23:11.489 NGUID/EUI64 Never Reused: No 00:23:11.489 Namespace Write Protected: No 00:23:11.489 Number of LBA Formats: 1 00:23:11.489 Current LBA Format: LBA Format #00 00:23:11.489 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:11.489 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:11.489 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:11.490 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:23:11.490 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:11.490 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:23:11.490 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:11.490 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:11.490 rmmod nvme_tcp 00:23:11.490 rmmod nvme_fabrics 00:23:11.490 rmmod nvme_keyring 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2456713 ']' 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2456713 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 2456713 ']' 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2456713 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2456713 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2456713' 00:23:11.749 killing process with pid 2456713 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2456713 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2456713 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- 
# [[ tcp == \t\c\p ]] 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:11.749 22:54:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:14.286 00:23:14.286 real 0m9.556s 00:23:14.286 user 0m6.063s 00:23:14.286 sys 0m4.810s 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:14.286 ************************************ 00:23:14.286 END TEST nvmf_identify 00:23:14.286 ************************************ 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.286 ************************************ 00:23:14.286 START TEST nvmf_perf 00:23:14.286 ************************************ 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:14.286 * Looking for test storage... 00:23:14.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:14.286 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:23:14.287 --rc genhtml_branch_coverage=1 00:23:14.287 --rc genhtml_function_coverage=1 00:23:14.287 --rc genhtml_legend=1 00:23:14.287 --rc geninfo_all_blocks=1 00:23:14.287 --rc geninfo_unexecuted_blocks=1 00:23:14.287 00:23:14.287 ' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.287 --rc genhtml_branch_coverage=1 00:23:14.287 --rc genhtml_function_coverage=1 00:23:14.287 --rc genhtml_legend=1 00:23:14.287 --rc geninfo_all_blocks=1 00:23:14.287 --rc geninfo_unexecuted_blocks=1 00:23:14.287 00:23:14.287 ' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.287 --rc genhtml_branch_coverage=1 00:23:14.287 --rc genhtml_function_coverage=1 00:23:14.287 --rc genhtml_legend=1 00:23:14.287 --rc geninfo_all_blocks=1 00:23:14.287 --rc geninfo_unexecuted_blocks=1 00:23:14.287 00:23:14.287 ' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:14.287 --rc genhtml_branch_coverage=1 00:23:14.287 --rc genhtml_function_coverage=1 00:23:14.287 --rc genhtml_legend=1 00:23:14.287 --rc geninfo_all_blocks=1 00:23:14.287 --rc geninfo_unexecuted_blocks=1 00:23:14.287 00:23:14.287 ' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:14.287 22:54:21 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:14.287 22:54:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:14.287 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:14.287 22:54:21 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:14.287 22:54:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:20.858 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:20.858 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.858 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:20.859 Found net devices under 0000:86:00.0: cvl_0_0 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:20.859 Found net devices under 0000:86:00.1: cvl_0_1 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:20.859 22:54:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:20.859 22:54:27 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:20.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:20.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:23:20.859 00:23:20.859 --- 10.0.0.2 ping statistics --- 00:23:20.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.859 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:20.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:20.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:23:20.859 00:23:20.859 --- 10.0.0.1 ping statistics --- 00:23:20.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:20.859 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2460481 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2460481 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:20.859 
22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2460481 ']' 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.859 22:54:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.859 [2024-12-10 22:54:27.782377] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:23:20.859 [2024-12-10 22:54:27.782421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.859 [2024-12-10 22:54:27.863161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:20.859 [2024-12-10 22:54:27.905351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.859 [2024-12-10 22:54:27.905386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.859 [2024-12-10 22:54:27.905393] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.859 [2024-12-10 22:54:27.905399] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.859 [2024-12-10 22:54:27.905408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.859 [2024-12-10 22:54:27.906953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.859 [2024-12-10 22:54:27.907065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.859 [2024-12-10 22:54:27.907207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:20.859 [2024-12-10 22:54:27.907208] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:23:20.859 22:54:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py load_subsystem_config 00:23:23.394 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py framework_get_config bdev 00:23:23.394 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:23.652 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:23:23.652 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:23.910 22:54:31 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:23.910 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:23:23.910 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:23.910 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:23.910 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:24.169 [2024-12-10 22:54:31.694462] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.169 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.428 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:24.428 22:54:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:24.428 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:24.428 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:24.687 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:24.945 [2024-12-10 22:54:32.477427] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.945 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 
10.0.0.2 -s 4420 00:23:25.204 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:23:25.204 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:25.204 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:25.204 22:54:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:23:26.581 Initializing NVMe Controllers 00:23:26.581 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:23:26.581 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:23:26.581 Initialization complete. Launching workers. 00:23:26.581 ======================================================== 00:23:26.581 Latency(us) 00:23:26.581 Device Information : IOPS MiB/s Average min max 00:23:26.581 PCIE (0000:5e:00.0) NSID 1 from core 0: 96489.20 376.91 331.18 9.39 4376.46 00:23:26.581 ======================================================== 00:23:26.581 Total : 96489.20 376.91 331.18 9.39 4376.46 00:23:26.581 00:23:26.581 22:54:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:27.958 Initializing NVMe Controllers 00:23:27.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:27.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:27.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:27.958 Initialization complete. Launching workers. 
00:23:27.958 ======================================================== 00:23:27.958 Latency(us) 00:23:27.958 Device Information : IOPS MiB/s Average min max 00:23:27.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 90.00 0.35 11341.02 107.77 44657.62 00:23:27.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17941.60 7187.50 47885.58 00:23:27.958 ======================================================== 00:23:27.958 Total : 146.00 0.57 13872.75 107.77 47885.58 00:23:27.958 00:23:27.958 22:54:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:29.335 Initializing NVMe Controllers 00:23:29.335 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:29.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:29.335 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:29.335 Initialization complete. Launching workers. 
00:23:29.335 ======================================================== 00:23:29.335 Latency(us) 00:23:29.335 Device Information : IOPS MiB/s Average min max 00:23:29.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10831.98 42.31 2960.57 415.11 6766.57 00:23:29.335 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3799.99 14.84 8456.26 5978.62 16055.79 00:23:29.335 ======================================================== 00:23:29.335 Total : 14631.97 57.16 4387.83 415.11 16055.79 00:23:29.335 00:23:29.335 22:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:23:29.335 22:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:23:29.335 22:54:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:31.869 Initializing NVMe Controllers 00:23:31.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:31.869 Controller IO queue size 128, less than required. 00:23:31.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:31.869 Controller IO queue size 128, less than required. 00:23:31.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:31.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:31.869 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:31.869 Initialization complete. Launching workers. 
00:23:31.869 ======================================================== 00:23:31.869 Latency(us) 00:23:31.869 Device Information : IOPS MiB/s Average min max 00:23:31.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1732.33 433.08 75403.21 55121.73 122148.00 00:23:31.869 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.44 144.86 224969.05 80407.06 348800.16 00:23:31.869 ======================================================== 00:23:31.869 Total : 2311.77 577.94 112891.71 55121.73 348800.16 00:23:31.869 00:23:31.869 22:54:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:23:31.869 No valid NVMe controllers or AIO or URING devices found 00:23:31.869 Initializing NVMe Controllers 00:23:31.869 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:31.869 Controller IO queue size 128, less than required. 00:23:31.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:31.869 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:23:31.869 Controller IO queue size 128, less than required. 00:23:31.869 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:31.869 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:23:31.869 WARNING: Some requested NVMe devices were skipped 00:23:31.869 22:54:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:35.158 Initializing NVMe Controllers 00:23:35.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:35.158 Controller IO queue size 128, less than required. 00:23:35.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.158 Controller IO queue size 128, less than required. 00:23:35.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:35.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:35.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:35.158 Initialization complete. Launching workers. 
00:23:35.158 00:23:35.158 ==================== 00:23:35.158 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:35.158 TCP transport: 00:23:35.158 polls: 15460 00:23:35.158 idle_polls: 12141 00:23:35.158 sock_completions: 3319 00:23:35.158 nvme_completions: 6325 00:23:35.158 submitted_requests: 9592 00:23:35.158 queued_requests: 1 00:23:35.158 00:23:35.158 ==================== 00:23:35.158 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:35.158 TCP transport: 00:23:35.158 polls: 14951 00:23:35.158 idle_polls: 11377 00:23:35.158 sock_completions: 3574 00:23:35.158 nvme_completions: 6323 00:23:35.158 submitted_requests: 9436 00:23:35.158 queued_requests: 1 00:23:35.158 ======================================================== 00:23:35.158 Latency(us) 00:23:35.158 Device Information : IOPS MiB/s Average min max 00:23:35.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1580.08 395.02 83770.85 50812.05 143569.75 00:23:35.158 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1579.58 394.90 81720.03 47930.14 114569.69 00:23:35.158 ======================================================== 00:23:35.158 Total : 3159.66 789.92 82745.60 47930.14 143569.75 00:23:35.158 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:35.158 22:54:42 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:35.158 rmmod nvme_tcp 00:23:35.158 rmmod nvme_fabrics 00:23:35.158 rmmod nvme_keyring 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2460481 ']' 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2460481 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2460481 ']' 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2460481 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2460481 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2460481' 00:23:35.158 killing process with pid 2460481 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@973 -- # kill 2460481 00:23:35.158 22:54:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2460481 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.536 22:54:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.441 00:23:38.441 real 0m24.503s 00:23:38.441 user 1m3.966s 00:23:38.441 sys 0m8.354s 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:38.441 ************************************ 00:23:38.441 END TEST nvmf_perf 00:23:38.441 ************************************ 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test 
nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.441 22:54:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.701 ************************************ 00:23:38.701 START TEST nvmf_fio_host 00:23:38.701 ************************************ 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:38.701 * Looking for test storage... 00:23:38.701 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.701 
22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.701 
22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.701 --rc genhtml_branch_coverage=1 00:23:38.701 --rc genhtml_function_coverage=1 00:23:38.701 --rc genhtml_legend=1 00:23:38.701 --rc geninfo_all_blocks=1 00:23:38.701 --rc geninfo_unexecuted_blocks=1 00:23:38.701 00:23:38.701 ' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.701 --rc genhtml_branch_coverage=1 00:23:38.701 --rc genhtml_function_coverage=1 00:23:38.701 --rc genhtml_legend=1 00:23:38.701 --rc geninfo_all_blocks=1 00:23:38.701 --rc geninfo_unexecuted_blocks=1 00:23:38.701 00:23:38.701 ' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.701 --rc genhtml_branch_coverage=1 00:23:38.701 --rc genhtml_function_coverage=1 00:23:38.701 --rc genhtml_legend=1 00:23:38.701 --rc geninfo_all_blocks=1 00:23:38.701 --rc geninfo_unexecuted_blocks=1 00:23:38.701 00:23:38.701 ' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:38.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.701 --rc genhtml_branch_coverage=1 00:23:38.701 --rc genhtml_function_coverage=1 00:23:38.701 --rc genhtml_legend=1 00:23:38.701 --rc geninfo_all_blocks=1 00:23:38.701 --rc geninfo_unexecuted_blocks=1 00:23:38.701 00:23:38.701 ' 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.701 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.702 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.702 
22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.702 22:54:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.303 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.0 (0x8086 - 0x159b)' 00:23:45.304 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:45.304 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.304 22:54:51 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:45.304 Found net devices under 0000:86:00.0: cvl_0_0 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:45.304 Found net devices under 0000:86:00.1: cvl_0_1 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:45.304 22:54:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.304 22:54:52 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:45.304 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.304 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:23:45.304 00:23:45.304 --- 10.0.0.2 ping statistics --- 00:23:45.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.304 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:23:45.304 00:23:45.304 --- 10.0.0.1 ping statistics --- 00:23:45.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.304 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2466587 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- 
# trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2466587 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2466587 ']' 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:45.304 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.305 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:45.305 22:54:52 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.305 [2024-12-10 22:54:52.401077] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:23:45.305 [2024-12-10 22:54:52.401130] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.305 [2024-12-10 22:54:52.481500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:45.305 [2024-12-10 22:54:52.522139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:45.305 [2024-12-10 22:54:52.522181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:45.305 [2024-12-10 22:54:52.522189] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:45.305 [2024-12-10 22:54:52.522195] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:45.305 [2024-12-10 22:54:52.522199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:45.305 [2024-12-10 22:54:52.523814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.305 [2024-12-10 22:54:52.523929] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.305 [2024-12-10 22:54:52.524038] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.305 [2024-12-10 22:54:52.524039] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:23:45.563 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:45.563 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:45.563 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:45.822 [2024-12-10 22:54:53.420496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.822 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:45.822 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:45.822 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.822 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:46.081 Malloc1 00:23:46.081 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.340 22:54:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:46.598 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.598 [2024-12-10 22:54:54.267643] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.598 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:46.857 22:54:54 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:23:46.857 22:54:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:47.116 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:47.116 fio-3.35 00:23:47.116 Starting 1 thread 00:23:49.646 00:23:49.646 test: (groupid=0, jobs=1): err= 0: pid=2467186: Tue Dec 10 22:54:57 2024 00:23:49.646 read: IOPS=11.6k, BW=45.5MiB/s (47.7MB/s)(91.2MiB/2005msec) 00:23:49.646 slat (nsec): min=1578, max=237426, avg=1724.56, stdev=2209.31 00:23:49.646 clat (usec): min=3067, max=10598, avg=6080.06, stdev=464.20 00:23:49.646 lat (usec): min=3104, max=10600, avg=6081.78, stdev=464.12 00:23:49.646 clat percentiles (usec): 00:23:49.646 | 1.00th=[ 4948], 5.00th=[ 5342], 10.00th=[ 5538], 20.00th=[ 5735], 00:23:49.646 | 30.00th=[ 5866], 40.00th=[ 5997], 50.00th=[ 6063], 60.00th=[ 6194], 00:23:49.646 | 70.00th=[ 6325], 80.00th=[ 6456], 90.00th=[ 6652], 95.00th=[ 6783], 00:23:49.646 | 99.00th=[ 7111], 99.50th=[ 7177], 99.90th=[ 8455], 99.95th=[ 9634], 00:23:49.646 | 99.99th=[10552] 00:23:49.646 bw ( KiB/s): min=45712, max=47048, per=99.95%, avg=46540.00, stdev=580.74, samples=4 00:23:49.646 iops : min=11428, max=11762, avg=11635.00, stdev=145.18, samples=4 00:23:49.646 write: IOPS=11.6k, BW=45.1MiB/s (47.3MB/s)(90.5MiB/2005msec); 0 zone resets 00:23:49.647 slat (nsec): min=1615, max=224143, avg=1785.79, stdev=1639.70 00:23:49.647 clat (usec): min=2421, max=8783, avg=4915.60, stdev=385.29 00:23:49.647 lat (usec): min=2435, max=8785, avg=4917.38, stdev=385.30 00:23:49.647 clat percentiles (usec): 00:23:49.647 | 1.00th=[ 4047], 5.00th=[ 4293], 10.00th=[ 4490], 20.00th=[ 4621], 00:23:49.647 | 30.00th=[ 4752], 40.00th=[ 4817], 50.00th=[ 4883], 60.00th=[ 5014], 
00:23:49.647 | 70.00th=[ 5080], 80.00th=[ 5211], 90.00th=[ 5342], 95.00th=[ 5473], 00:23:49.647 | 99.00th=[ 5800], 99.50th=[ 5932], 99.90th=[ 7373], 99.95th=[ 8455], 00:23:49.647 | 99.99th=[ 8717] 00:23:49.647 bw ( KiB/s): min=45776, max=46704, per=100.00%, avg=46224.00, stdev=382.89, samples=4 00:23:49.647 iops : min=11444, max=11676, avg=11556.00, stdev=95.72, samples=4 00:23:49.647 lat (msec) : 4=0.37%, 10=99.61%, 20=0.02% 00:23:49.647 cpu : usr=74.30%, sys=24.75%, ctx=105, majf=0, minf=3 00:23:49.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:49.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:49.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:49.647 issued rwts: total=23339,23170,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:49.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:49.647 00:23:49.647 Run status group 0 (all jobs): 00:23:49.647 READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=91.2MiB (95.6MB), run=2005-2005msec 00:23:49.647 WRITE: bw=45.1MiB/s (47.3MB/s), 45.1MiB/s-45.1MiB/s (47.3MB/s-47.3MB/s), io=90.5MiB (94.9MB), run=2005-2005msec 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- 
# sanitizers=('libasan' 'libclang_rt.asan') 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ 
-n '' ]] 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_nvme' 00:23:49.647 22:54:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:49.905 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:49.905 fio-3.35 00:23:49.905 Starting 1 thread 00:23:51.283 [2024-12-10 22:54:58.946219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x810230 is same with the state(6) to be set 00:23:51.283 [2024-12-10 22:54:58.946280] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x810230 is same with the state(6) to be set 00:23:52.220 00:23:52.220 test: (groupid=0, jobs=1): err= 0: pid=2467759: Tue Dec 10 22:54:59 2024 00:23:52.220 read: IOPS=10.8k, BW=170MiB/s (178MB/s)(340MiB/2006msec) 00:23:52.220 slat (nsec): min=2553, max=84509, avg=2843.17, stdev=1258.56 00:23:52.220 clat (usec): min=1953, max=12745, avg=6767.10, stdev=1571.20 00:23:52.220 lat (usec): min=1956, max=12747, avg=6769.94, stdev=1571.31 00:23:52.220 clat percentiles (usec): 00:23:52.220 | 1.00th=[ 3621], 5.00th=[ 4359], 10.00th=[ 4752], 20.00th=[ 5342], 00:23:52.220 | 30.00th=[ 5866], 40.00th=[ 6259], 50.00th=[ 6783], 60.00th=[ 7242], 00:23:52.220 | 70.00th=[ 7635], 80.00th=[ 7898], 90.00th=[ 8717], 95.00th=[ 9503], 00:23:52.220 | 99.00th=[10945], 99.50th=[11469], 99.90th=[12518], 99.95th=[12649], 00:23:52.220 | 99.99th=[12649] 00:23:52.220 bw ( KiB/s): min=79584, max=98112, per=51.14%, avg=88784.00, stdev=8431.03, samples=4 00:23:52.220 iops : min= 4974, max= 6132, avg=5549.00, stdev=526.94, samples=4 00:23:52.220 write: IOPS=6392, BW=99.9MiB/s 
(105MB/s)(181MiB/1810msec); 0 zone resets 00:23:52.220 slat (usec): min=29, max=381, avg=31.74, stdev= 7.18 00:23:52.220 clat (usec): min=3731, max=14564, avg=8716.71, stdev=1530.90 00:23:52.220 lat (usec): min=3762, max=14675, avg=8748.45, stdev=1532.17 00:23:52.220 clat percentiles (usec): 00:23:52.220 | 1.00th=[ 5735], 5.00th=[ 6521], 10.00th=[ 6980], 20.00th=[ 7439], 00:23:52.220 | 30.00th=[ 7832], 40.00th=[ 8160], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:52.220 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11469], 00:23:52.220 | 99.00th=[13042], 99.50th=[13435], 99.90th=[14091], 99.95th=[14353], 00:23:52.220 | 99.99th=[14484] 00:23:52.220 bw ( KiB/s): min=82144, max=102400, per=90.09%, avg=92152.00, stdev=9107.97, samples=4 00:23:52.220 iops : min= 5134, max= 6400, avg=5759.50, stdev=569.25, samples=4 00:23:52.220 lat (msec) : 2=0.01%, 4=1.44%, 10=89.71%, 20=8.85% 00:23:52.220 cpu : usr=85.59%, sys=13.67%, ctx=27, majf=0, minf=3 00:23:52.220 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:52.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:52.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:52.220 issued rwts: total=21766,11571,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:52.220 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:52.220 00:23:52.220 Run status group 0 (all jobs): 00:23:52.220 READ: bw=170MiB/s (178MB/s), 170MiB/s-170MiB/s (178MB/s-178MB/s), io=340MiB (357MB), run=2006-2006msec 00:23:52.220 WRITE: bw=99.9MiB/s (105MB/s), 99.9MiB/s-99.9MiB/s (105MB/s-105MB/s), io=181MiB (190MB), run=1810-1810msec 00:23:52.220 22:54:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:52.478 22:55:00 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:52.478 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:52.478 rmmod nvme_tcp 00:23:52.478 rmmod nvme_fabrics 00:23:52.478 rmmod nvme_keyring 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2466587 ']' 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2466587 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2466587 ']' 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2466587 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
2466587 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2466587' 00:23:52.737 killing process with pid 2466587 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2466587 00:23:52.737 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2466587 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:52.996 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:52.997 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:52.997 22:55:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 
addr flush cvl_0_1 00:23:54.902 00:23:54.902 real 0m16.364s 00:23:54.902 user 0m48.686s 00:23:54.902 sys 0m6.452s 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.902 ************************************ 00:23:54.902 END TEST nvmf_fio_host 00:23:54.902 ************************************ 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.902 ************************************ 00:23:54.902 START TEST nvmf_failover 00:23:54.902 ************************************ 00:23:54.902 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:55.162 * Looking for test storage... 
00:23:55.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.162 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # 
(( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.163 --rc genhtml_branch_coverage=1 00:23:55.163 --rc genhtml_function_coverage=1 00:23:55.163 --rc genhtml_legend=1 00:23:55.163 --rc geninfo_all_blocks=1 00:23:55.163 --rc geninfo_unexecuted_blocks=1 00:23:55.163 00:23:55.163 ' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 
-- # LCOV_OPTS=' 00:23:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.163 --rc genhtml_branch_coverage=1 00:23:55.163 --rc genhtml_function_coverage=1 00:23:55.163 --rc genhtml_legend=1 00:23:55.163 --rc geninfo_all_blocks=1 00:23:55.163 --rc geninfo_unexecuted_blocks=1 00:23:55.163 00:23:55.163 ' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.163 --rc genhtml_branch_coverage=1 00:23:55.163 --rc genhtml_function_coverage=1 00:23:55.163 --rc genhtml_legend=1 00:23:55.163 --rc geninfo_all_blocks=1 00:23:55.163 --rc geninfo_unexecuted_blocks=1 00:23:55.163 00:23:55.163 ' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:55.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.163 --rc genhtml_branch_coverage=1 00:23:55.163 --rc genhtml_function_coverage=1 00:23:55.163 --rc genhtml_legend=1 00:23:55.163 --rc geninfo_all_blocks=1 00:23:55.163 --rc geninfo_unexecuted_blocks=1 00:23:55.163 00:23:55.163 ' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 
-- # NVMF_IP_LEAST_ADDR=8 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 
-- # export PATH 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.163 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:23:55.163 22:55:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:01.733 22:55:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:01.733 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.733 22:55:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:01.733 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.733 22:55:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:01.733 Found net devices under 0000:86:00.0: cvl_0_0 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:01.733 Found net devices under 0000:86:00.1: cvl_0_1 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:01.733 22:55:08 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:01.733 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:01.734 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:01.734 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.428 ms 00:24:01.734 00:24:01.734 --- 10.0.0.2 ping statistics --- 00:24:01.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.734 rtt min/avg/max/mdev = 0.428/0.428/0.428/0.000 ms 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:01.734 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:01.734 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:24:01.734 00:24:01.734 --- 10.0.0.1 ping statistics --- 00:24:01.734 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:01.734 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2471634 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2471634 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2471634 ']' 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:01.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:01.734 22:55:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.734 [2024-12-10 22:55:08.846999] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:24:01.734 [2024-12-10 22:55:08.847050] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:01.734 [2024-12-10 22:55:08.927121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:01.734 [2024-12-10 22:55:08.968012] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:01.734 [2024-12-10 22:55:08.968048] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:01.734 [2024-12-10 22:55:08.968055] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:01.734 [2024-12-10 22:55:08.968061] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:01.734 [2024-12-10 22:55:08.968066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:01.734 [2024-12-10 22:55:08.969489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:01.734 [2024-12-10 22:55:08.969587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:24:01.734 [2024-12-10 22:55:08.969587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:01.734 [2024-12-10 22:55:09.270444] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:01.734 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:01.992 Malloc0 00:24:01.992 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:01.992 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:02.251 22:55:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:02.509 [2024-12-10 22:55:10.086194] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:02.509 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:02.768 [2024-12-10 22:55:10.294730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:02.768 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:03.027 [2024-12-10 22:55:10.503391] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2471989 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2471989 /var/tmp/bdevperf.sock 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@835 -- # '[' -z 2471989 ']' 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:03.027 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.028 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.028 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.028 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:03.286 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:03.286 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:03.286 22:55:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:03.546 NVMe0n1 00:24:03.546 22:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:03.804 00:24:03.804 22:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2472029 00:24:03.804 22:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:03.804 22:55:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bdevperf.sock perform_tests 00:24:05.182 22:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:05.182 [2024-12-10 22:55:12.682017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10
22:55:12.682418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682450] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682463] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682469] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682475] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set 00:24:05.183 [2024-12-10 22:55:12.682493] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bbf30 is same with the state(6) to be set
00:24:05.183 22:55:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:24:08.471 22:55:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:24:08.471
00:24:08.471 22:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:24:08.731 [2024-12-10 22:55:16.365009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bcd80 is same with the state(6) to be set
[last message repeated for tqpair=0x15bcd80 through 22:55:16.365077]
00:24:08.731 [2024-12-10 22:55:16.365083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x15bcd80 is same with the state(6) to be set
00:24:08.731 [2024-12-10 22:55:16.365088] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bcd80 is same with the state(6) to be set
[last message repeated for tqpair=0x15bcd80 through 22:55:16.365449]
00:24:08.731 [2024-12-10 22:55:16.365455]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15bcd80 is same with the state(6) to be set
[last message repeated for tqpair=0x15bcd80 through 22:55:16.365514]
00:24:08.732 22:55:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:24:12.020 22:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:12.020 [2024-12-10 22:55:19.581105] tcp.c:1099:nvmf_tcp_listen:
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:12.020 22:55:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:24:12.957 22:55:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:24:13.216 [2024-12-10 22:55:20.797414] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17091a0 is same with the state(6) to be set
[last message repeated for tqpair=0x17091a0 through 22:55:20.797488]
00:24:13.216 [2024-12-10 22:55:20.797494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17091a0 is
same with the state(6) to be set
[last message repeated for tqpair=0x17091a0 through 22:55:20.797519]
00:24:13.216 22:55:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2472029
00:24:19.790 {
00:24:19.790   "results": [
00:24:19.790     {
00:24:19.790       "job": "NVMe0n1",
00:24:19.790       "core_mask": "0x1",
00:24:19.790       "workload": "verify",
00:24:19.790       "status": "finished",
00:24:19.790       "verify_range": {
00:24:19.790         "start": 0,
00:24:19.790         "length": 16384
00:24:19.790       },
00:24:19.790       "queue_depth": 128,
00:24:19.790       "io_size": 4096,
00:24:19.790       "runtime": 15.011903,
00:24:19.790       "iops": 10968.762587927726,
00:24:19.790       "mibps": 42.84672885909268,
00:24:19.790       "io_failed": 4029,
00:24:19.790       "io_timeout": 0,
00:24:19.790       "avg_latency_us": 11368.289712154432,
00:24:19.790       "min_latency_us": 450.56,
00:24:19.790       "max_latency_us": 22681.154782608697
00:24:19.790     }
00:24:19.790   ],
00:24:19.790   "core_count": 1
00:24:19.790 }
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2471989
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2471989 ']'
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2471989
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:24:19.790 22:55:26
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471989
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471989'
00:24:19.790 killing process with pid 2471989
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2471989
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2471989
00:24:19.790 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt
00:24:19.790 [2024-12-10 22:55:10.577176] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:24:19.790 [2024-12-10 22:55:10.577231] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2471989 ]
00:24:19.790 [2024-12-10 22:55:10.652354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:19.790 [2024-12-10 22:55:10.693122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:24:19.790 Running I/O for 15 seconds...
00:24:19.790 10894.00 IOPS, 42.55 MiB/s [2024-12-10T21:55:27.519Z] [2024-12-10 22:55:12.684022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.790 [2024-12-10 22:55:12.684146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684238] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 
[2024-12-10 22:55:12.684404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.790 [2024-12-10 22:55:12.684483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.790 [2024-12-10 22:55:12.684491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.791 [2024-12-10 22:55:12.684661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684743] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 
[2024-12-10 22:55:12.684913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.684986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.684994] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.685000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.791 [2024-12-10 22:55:12.685015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.791 [2024-12-10 22:55:12.685030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.791 [2024-12-10 22:55:12.685045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.791 [2024-12-10 22:55:12.685060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.791 [2024-12-10 22:55:12.685075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.791 [2024-12-10 22:55:12.685089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.791 [2024-12-10 22:55:12.685097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 
[2024-12-10 22:55:12.685164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685247] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 
lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 
22:55:12.685591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.792 [2024-12-10 22:55:12.685663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.792 [2024-12-10 22:55:12.685669] nvme_qpair.c: 474:spdk_nvme_print_completion: 
[log condensed] 00:24:19.792-00:24:19.793 [2024-12-10 22:55:12.685677-12.698085] nvme_qpair.c: repeated *NOTICE*/*ERROR* records: queued WRITE I/O on sqid:1 (lba 97848-98000, len:8) aborted with ABORTED - SQ DELETION (00/08) by nvme_qpair_abort_queued_reqs and completed manually via nvme_qpair_manual_complete_request.
00:24:19.793 [2024-12-10 22:55:12.698130] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
[log condensed] [2024-12-10 22:55:12.698153-12.698207] nvme_qpair.c: admin qpair ASYNC EVENT REQUEST (0c) commands (qid:0 cid:0-3) aborted with ABORTED - SQ DELETION (00/08).
00:24:19.793 [2024-12-10 22:55:12.698213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:24:19.793 [2024-12-10 22:55:12.698242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7fe0 (9): Bad file descriptor
00:24:19.793 [2024-12-10 22:55:12.701792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:24:19.793 [2024-12-10 22:55:12.729845] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:24:19.793 10779.50 IOPS, 42.11 MiB/s [2024-12-10T21:55:27.522Z] 10933.33 IOPS, 42.71 MiB/s [2024-12-10T21:55:27.522Z] 10966.50 IOPS, 42.84 MiB/s [2024-12-10T21:55:27.522Z]
[log condensed] 00:24:19.793-00:24:19.796 [2024-12-10 22:55:16.366853-16.368091] nvme_qpair.c: repeated *NOTICE* records: queued READ/WRITE I/O on sqid:1 (lba 38144-38792, len:8) aborted with ABORTED - SQ DELETION (00/08) during the subsequent controller reset.
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.796 [2024-12-10 22:55:16.368270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38904 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368312] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38912 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38920 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368354] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368366] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38928 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368384] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368389] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38936 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368411] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38944 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368429] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 
[2024-12-10 22:55:16.368434] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38952 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368453] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368459] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38960 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368482] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38968 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368501] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:38976 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38984 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368547] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368552] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:38992 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368570] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368575] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39000 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368593] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368597] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39008 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368620] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.796 [2024-12-10 22:55:16.368625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39016 len:8 PRP1 0x0 PRP2 0x0 00:24:19.796 [2024-12-10 22:55:16.368631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.796 [2024-12-10 22:55:16.368642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.796 [2024-12-10 22:55:16.368647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.368652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39024 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368664] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368669] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 
22:55:16.368674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39032 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368688] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368693] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.368698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39040 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.368721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39048 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368734] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.368744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39056 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.368767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39064 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.368790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39072 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.368797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.368803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.368808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39080 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380153] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380163] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39088 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39096 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380219] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380226] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39104 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380251] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39112 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 
[2024-12-10 22:55:16.380273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380282] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380288] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39120 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380313] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39128 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39136 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380375] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39144 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380409] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39152 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.797 [2024-12-10 22:55:16.380447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.797 [2024-12-10 22:55:16.380454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39160 len:8 PRP1 0x0 PRP2 0x0 00:24:19.797 [2024-12-10 22:55:16.380463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380512] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:19.797 [2024-12-10 22:55:16.380540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:19.797 [2024-12-10 22:55:16.380551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:16.380570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:16.380588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:16.380607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:16.380615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:24:19.797 [2024-12-10 22:55:16.380653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7fe0 (9): Bad file descriptor 00:24:19.797 [2024-12-10 22:55:16.384542] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:24:19.797 [2024-12-10 22:55:16.413223] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:24:19.797 10866.40 IOPS, 42.45 MiB/s [2024-12-10T21:55:27.526Z] 10906.67 IOPS, 42.60 MiB/s [2024-12-10T21:55:27.526Z] 10918.57 IOPS, 42.65 MiB/s [2024-12-10T21:55:27.526Z] 10943.38 IOPS, 42.75 MiB/s [2024-12-10T21:55:27.526Z] 10975.67 IOPS, 42.87 MiB/s [2024-12-10T21:55:27.526Z] [2024-12-10 22:55:20.796330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:20.796367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:20.796377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:20.796389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:20.796397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:20.796404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.797 [2024-12-10 22:55:20.796411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:19.797 [2024-12-10 22:55:20.796417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.796424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c7fe0 is same with the state(6) to be set 00:24:19.798 [2024-12-10 22:55:20.797987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:19.798 [2024-12-10 22:55:20.798002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 
[2024-12-10 22:55:20.798268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798349] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 
[2024-12-10 22:55:20.798515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.798 [2024-12-10 22:55:20.798524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.798 [2024-12-10 22:55:20.798531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 
[2024-12-10 22:55:20.798765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:43968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:43976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:43984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.798986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.798994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 
[2024-12-10 22:55:20.799014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799098] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.799 [2024-12-10 22:55:20.799127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.799 [2024-12-10 22:55:20.799133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:19.800 [2024-12-10 22:55:20.799271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799351] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:19.800 [2024-12-10 22:55:20.799518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.800 [2024-12-10 22:55:20.799671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.800 [2024-12-10 22:55:20.799736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.800 [2024-12-10 22:55:20.799743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 
[2024-12-10 22:55:20.799773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799854] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:19.801 [2024-12-10 22:55:20.799877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:19.801 [2024-12-10 22:55:20.799900] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:19.801 [2024-12-10 22:55:20.799906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:44504 len:8 PRP1 0x0 PRP2 0x0 00:24:19.801 [2024-12-10 22:55:20.799915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:19.801 [2024-12-10 22:55:20.799961] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:19.801 [2024-12-10 22:55:20.799969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:24:19.801 [2024-12-10 22:55:20.802847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:24:19.801 [2024-12-10 22:55:20.802876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c7fe0 (9): Bad file descriptor 00:24:19.801 [2024-12-10 22:55:20.825612] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
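The IOPS and MiB/s columns bdevperf prints in the summaries here are related by the fixed 4 KiB IO size (`-o 4096`): MiB/s = IOPS × io_size / 2^20. A quick check of that arithmetic against the 15-second run's reported 10968.76 IOPS / 42.85 MiB/s:

```shell
#!/usr/bin/env bash
# Derive bdevperf's MiB/s column from its IOPS column; with -o 4096 each IO
# is 4096 bytes, and 1 MiB = 1048576 bytes, so MiB/s = IOPS * 4096 / 1048576.
iops=10968.76   # figure reported in the latency summary above
io_size=4096
mibps=$(awk -v iops="$iops" -v sz="$io_size" \
  'BEGIN { printf "%.2f", iops * sz / 1048576 }')
echo "$mibps MiB/s"  # 42.85 MiB/s
```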
00:24:19.801 10953.80 IOPS, 42.79 MiB/s [2024-12-10T21:55:27.530Z] 10963.45 IOPS, 42.83 MiB/s [2024-12-10T21:55:27.530Z] 10964.00 IOPS, 42.83 MiB/s [2024-12-10T21:55:27.530Z] 10967.62 IOPS, 42.84 MiB/s [2024-12-10T21:55:27.530Z] 10970.14 IOPS, 42.85 MiB/s [2024-12-10T21:55:27.530Z] 10969.07 IOPS, 42.85 MiB/s 00:24:19.801 Latency(us) 00:24:19.801 [2024-12-10T21:55:27.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.801 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:19.801 Verification LBA range: start 0x0 length 0x4000 00:24:19.801 NVMe0n1 : 15.01 10968.76 42.85 268.39 0.00 11368.29 450.56 22681.15 00:24:19.801 [2024-12-10T21:55:27.530Z] =================================================================================================================== 00:24:19.801 [2024-12-10T21:55:27.530Z] Total : 10968.76 42.85 268.39 0.00 11368.29 450.56 22681.15 00:24:19.801 Received shutdown signal, test time was about 15.000000 seconds 00:24:19.801 00:24:19.801 Latency(us) 00:24:19.801 [2024-12-10T21:55:27.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.801 [2024-12-10T21:55:27.530Z] =================================================================================================================== 00:24:19.801 [2024-12-10T21:55:27.530Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2474529 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2474529 /var/tmp/bdevperf.sock 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2474529 ']' 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:19.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.801 22:55:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:19.801 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:19.801 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:24:19.801 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:19.801 [2024-12-10 22:55:27.310679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:19.801 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:20.060 [2024-12-10 22:55:27.515291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 
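The `grep -c 'Resetting controller successful'` / `(( count != 3 ))` pair seen earlier in this trace is how failover.sh validates the run: one "Resetting controller successful" notice per forced failover, three expected in total. A standalone sketch of that check against a synthetic log file (the log contents below are fabricated for illustration):

```shell
#!/usr/bin/env bash
# Recreate failover.sh's pass/fail check: count reset-complete notices in
# the captured trace and require exactly one per failover (3 here).
log=$(mktemp)
printf '%s\n' \
  'bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.' \
  'bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.' \
  'bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.' > "$log"

count=$(grep -c 'Resetting controller successful' "$log")
rm -f "$log"
if (( count != 3 )); then
  echo "expected 3 successful resets, saw $count" >&2
  exit 1
fi
echo "count=$count"  # count=3
```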
00:24:20.060 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:20.060 NVMe0n1 00:24:20.318 22:55:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:20.577 00:24:20.577 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:24:20.835 00:24:20.835 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:20.835 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:21.093 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:21.093 22:55:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:24.380 22:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.380 22:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:24.380 22:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:24.380 22:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2475446 00:24:24.380 22:55:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2475446 00:24:25.757 { 00:24:25.757 "results": [ 00:24:25.757 { 00:24:25.757 "job": "NVMe0n1", 00:24:25.757 "core_mask": "0x1", 00:24:25.757 "workload": "verify", 00:24:25.757 "status": "finished", 00:24:25.757 "verify_range": { 00:24:25.757 "start": 0, 00:24:25.757 "length": 16384 00:24:25.757 }, 00:24:25.757 "queue_depth": 128, 00:24:25.757 "io_size": 4096, 00:24:25.757 "runtime": 1.011533, 00:24:25.757 "iops": 10836.028088060399, 00:24:25.757 "mibps": 42.32823471898593, 00:24:25.757 "io_failed": 0, 00:24:25.757 "io_timeout": 0, 00:24:25.757 "avg_latency_us": 11767.339443322768, 00:24:25.757 "min_latency_us": 2137.0434782608695, 00:24:25.757 "max_latency_us": 15386.713043478261 00:24:25.757 } 00:24:25.757 ], 00:24:25.757 "core_count": 1 00:24:25.757 } 00:24:25.757 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:24:25.757 [2024-12-10 22:55:26.914963] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:24:25.757 [2024-12-10 22:55:26.915016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2474529 ] 00:24:25.757 [2024-12-10 22:55:26.989589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.757 [2024-12-10 22:55:27.026886] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.757 [2024-12-10 22:55:28.754287] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:25.758 [2024-12-10 22:55:28.754333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.758 [2024-12-10 22:55:28.754345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.758 [2024-12-10 22:55:28.754354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.758 [2024-12-10 22:55:28.754361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.758 [2024-12-10 22:55:28.754368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.758 [2024-12-10 22:55:28.754375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.758 [2024-12-10 22:55:28.754382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:25.758 [2024-12-10 22:55:28.754389] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:25.758 [2024-12-10 22:55:28.754396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:24:25.758 [2024-12-10 22:55:28.754422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:24:25.758 [2024-12-10 22:55:28.754437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a9fe0 (9): Bad file descriptor 00:24:25.758 [2024-12-10 22:55:28.800244] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:24:25.758 Running I/O for 1 seconds... 00:24:25.758 10833.00 IOPS, 42.32 MiB/s 00:24:25.758 Latency(us) 00:24:25.758 [2024-12-10T21:55:33.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.758 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:25.758 Verification LBA range: start 0x0 length 0x4000 00:24:25.758 NVMe0n1 : 1.01 10836.03 42.33 0.00 0.00 11767.34 2137.04 15386.71 00:24:25.758 [2024-12-10T21:55:33.487Z] =================================================================================================================== 00:24:25.758 [2024-12-10T21:55:33.487Z] Total : 10836.03 42.33 0.00 0.00 11767.34 2137.04 15386.71 00:24:25.758 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:25.758 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:24:25.758 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:26.017 22:55:33 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:26.017 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:24:26.017 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:26.275 22:55:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:24:29.562 22:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:29.562 22:55:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2474529 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2474529 ']' 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2474529 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2474529 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2474529' 00:24:29.562 killing 
process with pid 2474529 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2474529 00:24:29.562 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2474529 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:29.821 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.080 rmmod nvme_tcp 00:24:30.080 rmmod nvme_fabrics 00:24:30.080 rmmod nvme_keyring 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2471634 ']' 00:24:30.080 22:55:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2471634 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2471634 ']' 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2471634 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2471634 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2471634' 00:24:30.080 killing process with pid 2471634 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2471634 00:24:30.080 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2471634 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:24:30.340 22:55:37 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:30.340 22:55:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:32.253 00:24:32.253 real 0m37.287s 00:24:32.253 user 1m57.755s 00:24:32.253 sys 0m8.061s 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:32.253 ************************************ 00:24:32.253 END TEST nvmf_failover 00:24:32.253 ************************************ 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:32.253 22:55:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.513 ************************************ 00:24:32.513 START TEST nvmf_host_discovery 00:24:32.513 ************************************ 00:24:32.513 22:55:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:24:32.513 * Looking for test storage... 
00:24:32.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:32.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.513 --rc genhtml_branch_coverage=1 00:24:32.513 --rc genhtml_function_coverage=1 00:24:32.513 --rc 
genhtml_legend=1 00:24:32.513 --rc geninfo_all_blocks=1 00:24:32.513 --rc geninfo_unexecuted_blocks=1 00:24:32.513 00:24:32.513 ' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:32.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.513 --rc genhtml_branch_coverage=1 00:24:32.513 --rc genhtml_function_coverage=1 00:24:32.513 --rc genhtml_legend=1 00:24:32.513 --rc geninfo_all_blocks=1 00:24:32.513 --rc geninfo_unexecuted_blocks=1 00:24:32.513 00:24:32.513 ' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:32.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.513 --rc genhtml_branch_coverage=1 00:24:32.513 --rc genhtml_function_coverage=1 00:24:32.513 --rc genhtml_legend=1 00:24:32.513 --rc geninfo_all_blocks=1 00:24:32.513 --rc geninfo_unexecuted_blocks=1 00:24:32.513 00:24:32.513 ' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:32.513 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:32.513 --rc genhtml_branch_coverage=1 00:24:32.513 --rc genhtml_function_coverage=1 00:24:32.513 --rc genhtml_legend=1 00:24:32.513 --rc geninfo_all_blocks=1 00:24:32.513 --rc geninfo_unexecuted_blocks=1 00:24:32.513 00:24:32.513 ' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:32.513 
22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:24:32.513 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:32.513 22:55:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:32.514 22:55:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:32.514 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:24:32.514 22:55:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:24:39.084 
22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:39.084 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:39.085 22:55:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:39.085 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:39.085 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:39.085 Found net devices under 0000:86:00.0: cvl_0_0 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:39.085 Found net devices under 0000:86:00.1: cvl_0_1 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:39.085 22:55:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:39.085 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:39.085 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:24:39.085 00:24:39.085 --- 10.0.0.2 ping statistics --- 00:24:39.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.085 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:39.085 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:39.085 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:24:39.085 00:24:39.085 --- 10.0.0.1 ping statistics --- 00:24:39.085 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:39.085 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:39.085 
22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2479896 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2479896 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2479896 ']' 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:39.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.085 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.085 [2024-12-10 22:55:46.271803] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:24:39.085 [2024-12-10 22:55:46.271855] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:39.085 [2024-12-10 22:55:46.351109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.085 [2024-12-10 22:55:46.392309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:39.085 [2024-12-10 22:55:46.392341] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:39.086 [2024-12-10 22:55:46.392351] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:39.086 [2024-12-10 22:55:46.392357] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:39.086 [2024-12-10 22:55:46.392362] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:39.086 [2024-12-10 22:55:46.392901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 [2024-12-10 22:55:46.525620] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 [2024-12-10 22:55:46.533786] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:39.086 22:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 null0 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 null1 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2479918 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2479918 /tmp/host.sock 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2479918 ']' 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:24:39.086 
22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:39.086 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.086 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.086 [2024-12-10 22:55:46.611514] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:24:39.086 [2024-12-10 22:55:46.611555] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2479918 ] 00:24:39.086 [2024-12-10 22:55:46.685584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.086 [2024-12-10 22:55:46.727268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:24:39.345 22:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:24:39.345 22:55:46 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.345 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:24:39.346 22:55:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:24:39.346 
22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.346 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 [2024-12-10 22:55:47.159382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:39.605 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:39.606 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:39.606 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.606 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:39.606 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:39.606 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:39.864 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.864 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:24:39.864 22:55:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:24:40.432 [2024-12-10 22:55:47.851621] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:40.432 [2024-12-10 22:55:47.851643] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:40.432 [2024-12-10 22:55:47.851656] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:40.432 [2024-12-10 22:55:47.939915] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:40.432 [2024-12-10 22:55:48.125016] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:24:40.432 [2024-12-10 22:55:48.125799] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x99a9a0:1 started. 00:24:40.432 [2024-12-10 22:55:48.127177] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:40.432 [2024-12-10 22:55:48.127193] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:40.432 [2024-12-10 22:55:48.131035] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x99a9a0 was disconnected and freed. delete nvme_qpair. 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.691 22:55:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:24:40.691 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:40.950 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:40.951 [2024-12-10 22:55:48.577455] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x99ad20:1 started. 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.951
00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.951 [2024-12-10 22:55:48.623653] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x99ad20 was disconnected and freed. delete nvme_qpair. 
00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:40.951 [2024-12-10 22:55:48.671663] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:40.951 [2024-12-10 22:55:48.672227] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:24:40.951 [2024-12-10 22:55:48.672245] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:40.951 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:41.211 [2024-12-10 22:55:48.759499] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:24:41.211 22:55:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:24:41.211 [2024-12-10 22:55:48.899571] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:24:41.211 [2024-12-10 22:55:48.899608] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:24:41.211 [2024-12-10 22:55:48.899617] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:41.211 [2024-12-10 22:55:48.899622] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:42.148 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.409 [2024-12-10 22:55:49.928160] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:24:42.409 [2024-12-10 22:55:49.928183] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:42.409 [2024-12-10 22:55:49.929564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:42.409 [2024-12-10 22:55:49.929580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.409 [2024-12-10 22:55:49.929588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:42.409 [2024-12-10 22:55:49.929595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.409 [2024-12-10 22:55:49.929619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:42.409 [2024-12-10 22:55:49.929626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.409 [2024-12-10 22:55:49.929633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:42.409 [2024-12-10 22:55:49.929641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:42.409 [2024-12-10 22:55:49.929650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:42.409 [2024-12-10 22:55:49.939576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.409 [2024-12-10 22:55:49.949612] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.409 [2024-12-10 22:55:49.949629] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.409 [2024-12-10 22:55:49.949637] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.409 [2024-12-10 22:55:49.949641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.409 [2024-12-10 22:55:49.949658] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.409 [2024-12-10 22:55:49.949914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.409 [2024-12-10 22:55:49.949928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.409 [2024-12-10 22:55:49.949936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.409 [2024-12-10 22:55:49.949950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.409 [2024-12-10 22:55:49.949961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.409 [2024-12-10 22:55:49.949967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.409 [2024-12-10 22:55:49.949977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.409 [2024-12-10 22:55:49.949984] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.409 [2024-12-10 22:55:49.949989] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.409 [2024-12-10 22:55:49.949994] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.409 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.409 [2024-12-10 22:55:49.959689] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.409 [2024-12-10 22:55:49.959702] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.409 [2024-12-10 22:55:49.959707] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.409 [2024-12-10 22:55:49.959711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.410 [2024-12-10 22:55:49.959723] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.959963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.410 [2024-12-10 22:55:49.959977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.410 [2024-12-10 22:55:49.959985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.410 [2024-12-10 22:55:49.959996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.410 [2024-12-10 22:55:49.960006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.410 [2024-12-10 22:55:49.960012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.410 [2024-12-10 22:55:49.960019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.410 [2024-12-10 22:55:49.960024] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.410 [2024-12-10 22:55:49.960029] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.410 [2024-12-10 22:55:49.960037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.410 [2024-12-10 22:55:49.969754] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.410 [2024-12-10 22:55:49.969765] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.410 [2024-12-10 22:55:49.969769] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.969773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.410 [2024-12-10 22:55:49.969785] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.969930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.410 [2024-12-10 22:55:49.969941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.410 [2024-12-10 22:55:49.969948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.410 [2024-12-10 22:55:49.969958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.410 [2024-12-10 22:55:49.969968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.410 [2024-12-10 22:55:49.969974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.410 [2024-12-10 22:55:49.969981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.410 [2024-12-10 22:55:49.969986] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.410 [2024-12-10 22:55:49.969991] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.410 [2024-12-10 22:55:49.969995] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.410 [2024-12-10 22:55:49.979817] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.410 [2024-12-10 22:55:49.979830] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.410 [2024-12-10 22:55:49.979834] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.979838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.410 [2024-12-10 22:55:49.979852] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.980088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.410 [2024-12-10 22:55:49.980101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.410 [2024-12-10 22:55:49.980109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.410 [2024-12-10 22:55:49.980120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.410 [2024-12-10 22:55:49.980130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.410 [2024-12-10 22:55:49.980136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.410 [2024-12-10 22:55:49.980143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.410 [2024-12-10 22:55:49.980149] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.410 [2024-12-10 22:55:49.980163] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.410 [2024-12-10 22:55:49.980167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.410 22:55:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:42.410 [2024-12-10 22:55:49.989883] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.410 [2024-12-10 22:55:49.989894] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.410 [2024-12-10 22:55:49.989899] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.989903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.410 [2024-12-10 22:55:49.989914] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.990063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.410 [2024-12-10 22:55:49.990077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.410 [2024-12-10 22:55:49.990087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.410 [2024-12-10 22:55:49.990099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.410 [2024-12-10 22:55:49.990109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.410 [2024-12-10 22:55:49.990118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.410 [2024-12-10 22:55:49.990126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.410 [2024-12-10 22:55:49.990134] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.410 [2024-12-10 22:55:49.990141] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.410 [2024-12-10 22:55:49.990146] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.410 [2024-12-10 22:55:49.999946] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.410 [2024-12-10 22:55:49.999965] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.410 [2024-12-10 22:55:49.999970] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:49.999974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.410 [2024-12-10 22:55:49.999987] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:50.000228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.410 [2024-12-10 22:55:50.000242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.410 [2024-12-10 22:55:50.000249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.410 [2024-12-10 22:55:50.000260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.410 [2024-12-10 22:55:50.000269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.410 [2024-12-10 22:55:50.000275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.410 [2024-12-10 22:55:50.000282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.410 [2024-12-10 22:55:50.000287] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.410 [2024-12-10 22:55:50.000292] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.410 [2024-12-10 22:55:50.000296] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.410 [2024-12-10 22:55:50.010023] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:24:42.410 [2024-12-10 22:55:50.010046] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:24:42.410 [2024-12-10 22:55:50.010054] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:50.010061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:24:42.410 [2024-12-10 22:55:50.010084] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:24:42.410 [2024-12-10 22:55:50.010282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:42.410 [2024-12-10 22:55:50.010304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x96c970 with addr=10.0.0.2, port=4420
00:24:42.410 [2024-12-10 22:55:50.010315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96c970 is same with the state(6) to be set
00:24:42.411 [2024-12-10 22:55:50.010332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x96c970 (9): Bad file descriptor
00:24:42.411 [2024-12-10 22:55:50.010347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:42.411 [2024-12-10 22:55:50.010358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:42.411 [2024-12-10 22:55:50.010368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:24:42.411 [2024-12-10 22:55:50.010377] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:42.411 [2024-12-10 22:55:50.010385] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:42.411 [2024-12-10 22:55:50.010391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:42.411 [2024-12-10 22:55:50.015528] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found
00:24:42.411 [2024-12-10 22:55:50.015587] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]]
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.411 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]]
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:24:42.670 22:55:50
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.670 22:55:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.048 [2024-12-10 22:55:51.351709] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:44.048 [2024-12-10 22:55:51.351728] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:44.048 [2024-12-10 22:55:51.351740] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:44.048 [2024-12-10 22:55:51.437987] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:24:44.048 [2024-12-10 22:55:51.705205] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:24:44.048 [2024-12-10 22:55:51.705850] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x968620:1 started. 00:24:44.048 [2024-12-10 22:55:51.707506] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:44.048 [2024-12-10 22:55:51.707532] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.048 request: 00:24:44.048 { 00:24:44.048 "name": "nvme", 00:24:44.048 "trtype": "tcp", 00:24:44.048 "traddr": "10.0.0.2", 00:24:44.048 "adrfam": "ipv4", 00:24:44.048 "trsvcid": "8009", 00:24:44.048 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:44.048 "wait_for_attach": true, 00:24:44.048 "method": "bdev_nvme_start_discovery", 00:24:44.048 "req_id": 1 00:24:44.048 } 00:24:44.048 Got JSON-RPC error response 00:24:44.048 response: 00:24:44.048 { 00:24:44.048 "code": -17, 00:24:44.048 "message": "File exists" 00:24:44.048 } 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:44.048 
22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:44.048 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.048 [2024-12-10 22:55:51.750741] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x968620 was disconnected and freed. delete nvme_qpair. 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:44.307 22:55:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.307 request: 00:24:44.307 { 00:24:44.307 "name": "nvme_second", 00:24:44.307 "trtype": "tcp", 00:24:44.307 "traddr": "10.0.0.2", 00:24:44.307 "adrfam": "ipv4", 00:24:44.307 "trsvcid": "8009", 00:24:44.307 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:44.307 "wait_for_attach": true, 00:24:44.307 "method": "bdev_nvme_start_discovery", 00:24:44.307 "req_id": 1 00:24:44.307 } 00:24:44.307 Got JSON-RPC error response 00:24:44.307 response: 00:24:44.307 { 00:24:44.307 "code": -17, 00:24:44.307 "message": "File exists" 00:24:44.307 } 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:24:44.307 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.308 22:55:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:45.244 [2024-12-10 22:55:52.951193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:45.244 [2024-12-10 22:55:52.951220] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99cec0 with addr=10.0.0.2, port=8010 00:24:45.244 [2024-12-10 22:55:52.951233] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:45.244 [2024-12-10 22:55:52.951240] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:45.244 [2024-12-10 22:55:52.951246] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:46.619 [2024-12-10 22:55:53.953685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:46.619 [2024-12-10 22:55:53.953709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x99cec0 with addr=10.0.0.2, port=8010 00:24:46.619 [2024-12-10 22:55:53.953721] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:46.619 [2024-12-10 22:55:53.953728] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:46.619 [2024-12-10 22:55:53.953734] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:47.603 [2024-12-10 22:55:54.955906] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:47.603 request: 00:24:47.603 { 00:24:47.603 "name": "nvme_second", 00:24:47.603 "trtype": "tcp", 00:24:47.603 "traddr": "10.0.0.2", 00:24:47.603 "adrfam": "ipv4", 00:24:47.603 "trsvcid": "8010", 00:24:47.603 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:47.603 "wait_for_attach": false, 00:24:47.603 "attach_timeout_ms": 3000, 00:24:47.603 "method": "bdev_nvme_start_discovery", 00:24:47.603 "req_id": 1 00:24:47.603 } 00:24:47.603 Got JSON-RPC error response 00:24:47.603 response: 00:24:47.603 { 00:24:47.603 "code": -110, 00:24:47.603 "message": "Connection timed out" 00:24:47.603 } 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 
1 == 0 ]] 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:47.603 22:55:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.603 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2479918 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:24:47.604 22:55:55 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:47.604 rmmod nvme_tcp 00:24:47.604 rmmod nvme_fabrics 00:24:47.604 rmmod nvme_keyring 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2479896 ']' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2479896 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2479896 ']' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2479896 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2479896 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2479896' 
00:24:47.604 killing process with pid 2479896 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2479896 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2479896 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:47.604 22:55:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:50.181 00:24:50.181 real 0m17.353s 00:24:50.181 user 0m20.734s 00:24:50.181 sys 0m5.779s 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:50.181 22:55:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:50.181 ************************************ 00:24:50.181 END TEST nvmf_host_discovery 00:24:50.181 ************************************ 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.181 ************************************ 00:24:50.181 START TEST nvmf_host_multipath_status 00:24:50.181 ************************************ 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:50.181 * Looking for test storage... 
00:24:50.181 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:50.181 22:55:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:50.181 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:50.182 22:55:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:50.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.182 --rc genhtml_branch_coverage=1 00:24:50.182 --rc genhtml_function_coverage=1 00:24:50.182 --rc genhtml_legend=1 00:24:50.182 --rc geninfo_all_blocks=1 00:24:50.182 --rc geninfo_unexecuted_blocks=1 00:24:50.182 00:24:50.182 ' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:50.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.182 --rc genhtml_branch_coverage=1 00:24:50.182 --rc genhtml_function_coverage=1 00:24:50.182 --rc genhtml_legend=1 00:24:50.182 --rc geninfo_all_blocks=1 00:24:50.182 --rc geninfo_unexecuted_blocks=1 00:24:50.182 00:24:50.182 ' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:50.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.182 --rc genhtml_branch_coverage=1 00:24:50.182 --rc genhtml_function_coverage=1 00:24:50.182 --rc genhtml_legend=1 00:24:50.182 --rc geninfo_all_blocks=1 00:24:50.182 --rc geninfo_unexecuted_blocks=1 00:24:50.182 00:24:50.182 ' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:50.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:50.182 --rc genhtml_branch_coverage=1 00:24:50.182 --rc genhtml_function_coverage=1 00:24:50.182 --rc genhtml_legend=1 00:24:50.182 --rc geninfo_all_blocks=1 00:24:50.182 --rc geninfo_unexecuted_blocks=1 00:24:50.182 00:24:50.182 ' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:50.182 
22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:50.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/bpftrace.sh 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:50.182 22:55:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:24:50.182 22:55:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:56.754 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:56.754 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.754 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:56.755 Found net devices under 0000:86:00.0: cvl_0_0 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.755 22:56:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:56.755 Found net devices under 0000:86:00.1: cvl_0_1 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.755 22:56:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:56.755 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.755 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:24:56.755 00:24:56.755 --- 10.0.0.2 ping statistics --- 00:24:56.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.755 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.755 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.755 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:24:56.755 00:24:56.755 --- 10.0.0.1 ping statistics --- 00:24:56.755 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.755 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2485001 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 2485001 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2485001 ']' 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:56.755 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:56.756 [2024-12-10 22:56:03.607236] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:24:56.756 [2024-12-10 22:56:03.607281] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.756 [2024-12-10 22:56:03.688214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:56.756 [2024-12-10 22:56:03.729460] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.756 [2024-12-10 22:56:03.729495] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:56.756 [2024-12-10 22:56:03.729502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.756 [2024-12-10 22:56:03.729508] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.756 [2024-12-10 22:56:03.729513] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:56.756 [2024-12-10 22:56:03.730666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.756 [2024-12-10 22:56:03.730667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2485001 00:24:56.756 22:56:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:56.756 [2024-12-10 22:56:04.032774] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.756 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:56.756 Malloc0 00:24:56.756 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:57.015 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:57.015 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:57.274 [2024-12-10 22:56:04.870148] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:57.274 22:56:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:57.537 [2024-12-10 22:56:05.058627] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2485255 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2485255 /var/tmp/bdevperf.sock 00:24:57.537 
22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2485255 ']' 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:57.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:57.537 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:57.803 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:57.803 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:57.803 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:57.803 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:58.370 Nvme0n1 00:24:58.371 22:56:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 
-n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:58.629 Nvme0n1 00:24:58.629 22:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:58.629 22:56:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:01.170 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:01.170 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:01.170 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:01.170 22:56:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:02.107 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:02.107 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:02.107 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.107 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:02.366 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.366 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:02.366 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.366 22:56:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:02.625 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:02.625 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:02.625 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:02.625 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:02.883 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:02.883 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:02.884 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:02.884 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.142 22:56:10 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:03.142 22:56:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:03.401 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:03.401 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:03.401 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:03.659 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:03.918 22:56:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:04.854 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:04.854 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:04.854 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:04.854 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:05.113 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:05.113 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:05.113 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.113 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:05.372 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.372 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:05.372 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.372 22:56:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:05.372 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.372 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:05.372 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.372 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:05.631 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.631 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:05.631 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.631 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:05.890 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:05.890 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:05.890 22:56:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:05.890 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:06.149 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:06.149 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:06.149 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:06.408 22:56:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:06.408 22:56:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:07.786 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:07.786 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:07.786 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.787 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:25:07.787 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:07.787 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:07.787 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:07.787 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.046 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").connected' 00:25:08.305 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.305 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:08.305 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.305 22:56:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:08.564 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.564 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:08.564 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:08.564 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:08.823 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:08.823 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:08.823 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:09.082 22:56:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:09.082 22:56:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:10.462 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:10.462 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:10.462 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.462 22:56:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:10.462 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.462 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:10.462 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.462 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:10.720 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:10.720 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
00:25:10.720 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.720 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:10.979 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:11.238 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:11.238 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 
accessible false 00:25:11.238 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:11.238 22:56:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:11.496 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:11.496 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:11.496 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:11.755 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:12.014 22:56:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:12.951 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:12.951 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:12.951 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:12.951 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:13.209 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.209 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:13.209 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.209 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:13.469 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.469 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:13.469 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.469 22:56:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:13.469 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.469 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:13.469 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.469 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:13.728 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:13.728 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:13.728 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.728 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:13.986 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:13.987 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:13.987 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:13.987 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:14.246 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:14.246 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:14.246 22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:14.246 
22:56:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:14.505 22:56:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:15.442 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:15.442 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:15.442 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.442 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:15.701 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:15.701 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:15.701 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.701 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:15.960 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:15.960 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 
00:25:15.960 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:15.960 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:16.219 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.219 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:16.219 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.219 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:16.478 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.479 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:16.479 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.479 22:56:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:16.479 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:16.479 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 
accessible true 00:25:16.479 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:16.479 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:16.738 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:16.738 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:16.997 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:25:16.997 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:17.256 22:56:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:17.518 22:56:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:18.456 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:18.456 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:18.456 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.456 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:18.715 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.715 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:18.715 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.715 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:18.975 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:18.975 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:18.975 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:18.975 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.233 22:56:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:19.491 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.491 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:19.491 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:19.491 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:19.750 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:19.750 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:19.750 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:20.010 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:20.268 22:56:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:21.206 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:21.206 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:21.206 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.206 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:21.465 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:21.465 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:21.465 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.465 22:56:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:21.724 22:56:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.724 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:21.983 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:21.983 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:21.983 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:21.983 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:22.242 
22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.242 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:22.242 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:22.242 22:56:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:22.501 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:22.501 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:22.501 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:22.759 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:22.759 22:56:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.137 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:24.395 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.395 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:24.396 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.396 22:56:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:24.396 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.396 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:24.396 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.396 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:24.655 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.655 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:24.655 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.655 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:24.914 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:24.914 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:24.914 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:24.914 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:25.172 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:25.172 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:25:25.172 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:25.431 22:56:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:25.431 22:56:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:26.806 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:27.065 22:56:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.065 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:27.324 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.324 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:27.324 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.324 22:56:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 
00:25:27.591 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:27.591 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:27.591 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:27.591 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2485255 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2485255 ']' 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2485255 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485255 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485255' 00:25:27.851 killing process with pid 
2485255 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2485255 00:25:27.851 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2485255 00:25:27.851 { 00:25:27.851 "results": [ 00:25:27.851 { 00:25:27.851 "job": "Nvme0n1", 00:25:27.851 "core_mask": "0x4", 00:25:27.851 "workload": "verify", 00:25:27.851 "status": "terminated", 00:25:27.851 "verify_range": { 00:25:27.851 "start": 0, 00:25:27.851 "length": 16384 00:25:27.851 }, 00:25:27.851 "queue_depth": 128, 00:25:27.851 "io_size": 4096, 00:25:27.851 "runtime": 28.992642, 00:25:27.851 "iops": 10108.116397256932, 00:25:27.851 "mibps": 39.48482967678489, 00:25:27.851 "io_failed": 0, 00:25:27.851 "io_timeout": 0, 00:25:27.851 "avg_latency_us": 12639.28085904656, 00:25:27.851 "min_latency_us": 562.7547826086957, 00:25:27.851 "max_latency_us": 3019898.88 00:25:27.851 } 00:25:27.851 ], 00:25:27.851 "core_count": 1 00:25:27.851 } 00:25:28.113 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2485255 00:25:28.113 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:25:28.113 [2024-12-10 22:56:05.117980] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:25:28.113 [2024-12-10 22:56:05.118035] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485255 ] 00:25:28.113 [2024-12-10 22:56:05.195484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.113 [2024-12-10 22:56:05.235800] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:28.113 Running I/O for 90 seconds... 
00:25:28.113 10964.00 IOPS, 42.83 MiB/s [2024-12-10T21:56:35.842Z] 10874.00 IOPS, 42.48 MiB/s [2024-12-10T21:56:35.842Z] 10923.33 IOPS, 42.67 MiB/s [2024-12-10T21:56:35.842Z] 10980.75 IOPS, 42.89 MiB/s [2024-12-10T21:56:35.842Z] 10928.20 IOPS, 42.69 MiB/s [2024-12-10T21:56:35.842Z] 10887.50 IOPS, 42.53 MiB/s [2024-12-10T21:56:35.842Z] 10890.43 IOPS, 42.54 MiB/s [2024-12-10T21:56:35.842Z] 10907.12 IOPS, 42.61 MiB/s [2024-12-10T21:56:35.842Z] 10927.56 IOPS, 42.69 MiB/s [2024-12-10T21:56:35.842Z] 10938.00 IOPS, 42.73 MiB/s [2024-12-10T21:56:35.842Z] 10944.91 IOPS, 42.75 MiB/s [2024-12-10T21:56:35.842Z] 10946.50 IOPS, 42.76 MiB/s [2024-12-10T21:56:35.842Z] [2024-12-10 22:56:19.302700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:77184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:77192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:77200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:77208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.302981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.302988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:28.113 [2024-12-10 22:56:19.303000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.113 [2024-12-10 22:56:19.303007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303045] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:77312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:77320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:77328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:77336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:77344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:77352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:77360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:77368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:77376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303267] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:77400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:77408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:77416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:77424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303376] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:77432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:77440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:77448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:77464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:77472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303480] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:77480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:77488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:77496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:77504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:77520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:77536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:77544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:77552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:77560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303692] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.303704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:77568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.303711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.304054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:77576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.114 [2024-12-10 22:56:19.304067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.304085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.114 [2024-12-10 22:56:19.304092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:28.114 [2024-12-10 22:56:19.304108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.114 [2024-12-10 22:56:19.304116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304278] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304401] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304517] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304755] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.115 [2024-12-10 22:56:19.304973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:28.115 [2024-12-10 22:56:19.304987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.304994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:77936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:77968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:77976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:77992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:78000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:78008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:78016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:78024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:78032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:78040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:78048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:78056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:78064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:78072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:78080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:78088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:78096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:78104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:78112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:78120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:78128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:78136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:78144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:78152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:78160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:78168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:78176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:78184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:78192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.305977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:28.116 [2024-12-10 22:56:19.305994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:78200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.116 [2024-12-10 22:56:19.306001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:28.116 10824.62 IOPS, 42.28 MiB/s [2024-12-10T21:56:35.845Z] 10051.43 IOPS, 39.26 MiB/s [2024-12-10T21:56:35.845Z] 9381.33 IOPS, 36.65 MiB/s [2024-12-10T21:56:35.845Z] 8889.31 IOPS, 34.72 MiB/s [2024-12-10T21:56:35.845Z] 9013.82 IOPS, 35.21 MiB/s [2024-12-10T21:56:35.845Z] 9114.94 IOPS, 35.61 MiB/s [2024-12-10T21:56:35.845Z] 9265.42 IOPS, 36.19 MiB/s [2024-12-10T21:56:35.845Z] 9452.85 IOPS, 36.93 MiB/s [2024-12-10T21:56:35.845Z] 9610.67 IOPS, 37.54 MiB/s [2024-12-10T21:56:35.845Z] 9677.86 IOPS, 37.80 MiB/s [2024-12-10T21:56:35.845Z] 9728.00 IOPS, 38.00 MiB/s [2024-12-10T21:56:35.845Z] 9775.33 IOPS, 38.18 MiB/s [2024-12-10T21:56:35.845Z] 9882.24 IOPS, 38.60 MiB/s [2024-12-10T21:56:35.845Z] 9981.08 IOPS, 38.99 MiB/s [2024-12-10T21:56:35.846Z] [2024-12-10 22:56:33.106038] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:40696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:40776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:40792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:40856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106337] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:40952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:40968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.117 [2024-12-10 22:56:33.106500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.117 [2024-12-10 22:56:33.106519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.117 [2024-12-10 22:56:33.106539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:40992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:41008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:41024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:41040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:41072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106655] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:41088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:41104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:41120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:41136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:41152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:28.117 [2024-12-10 22:56:33.106763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:41168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.117 [2024-12-10 22:56:33.106770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:41184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:41216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:41232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106865] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:41264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:41280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.106905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.106924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.106943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.106962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.106975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.106981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:41304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:41320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107569] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:41328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:41344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:40600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:40632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107682] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.107689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:41360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:41376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:41392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.107759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.107766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.108986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:41424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.109003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:41440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.109027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:41456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.109047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:41472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:28.118 [2024-12-10 22:56:33.109070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:40704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:40736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:40800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:40896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:40960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.118 [2024-12-10 22:56:33.109251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:28.118 [2024-12-10 22:56:33.109263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:41000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.119 [2024-12-10 22:56:33.109270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:28.119 [2024-12-10 22:56:33.109283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:41032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:28.119 [2024-12-10 22:56:33.109289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:28.119 10053.37 IOPS, 39.27 MiB/s [2024-12-10T21:56:35.848Z] 10085.36 IOPS, 39.40 MiB/s [2024-12-10T21:56:35.848Z] Received shutdown signal, test time was about 28.993303 seconds 00:25:28.119 00:25:28.119 Latency(us) 00:25:28.119 [2024-12-10T21:56:35.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.119 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:28.119 Verification LBA range: start 0x0 length 0x4000 00:25:28.119 Nvme0n1 : 28.99 10108.12 39.48 0.00 0.00 12639.28 562.75 3019898.88 00:25:28.119 [2024-12-10T21:56:35.848Z] =================================================================================================================== 00:25:28.119 [2024-12-10T21:56:35.848Z] Total : 10108.12 39.48 0.00 
0.00 12639.28 562.75 3019898.88 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/try.txt 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:25:28.119 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.377 rmmod nvme_tcp 00:25:28.377 rmmod nvme_fabrics 00:25:28.377 rmmod nvme_keyring 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2485001 ']' 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2485001 
00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2485001 ']' 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2485001 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485001 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485001' 00:25:28.377 killing process with pid 2485001 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2485001 00:25:28.377 22:56:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2485001 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.636 22:56:36 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.636 22:56:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:30.542 00:25:30.542 real 0m40.786s 00:25:30.542 user 1m50.748s 00:25:30.542 sys 0m11.634s 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:30.542 ************************************ 00:25:30.542 END TEST nvmf_host_multipath_status 00:25:30.542 ************************************ 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:30.542 22:56:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.801 ************************************ 00:25:30.801 START TEST nvmf_discovery_remove_ifc 00:25:30.801 ************************************ 00:25:30.801 
22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:30.801 * Looking for test storage... 00:25:30.801 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:30.801 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:25:30.802 22:56:38 
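The trace above is stepping through the `cmp_versions` helper in scripts/common.sh: both version strings are split on `.`, `-`, and `:` into arrays, and components are compared numerically until one side wins. A minimal standalone sketch of that logic (not the exact scripts/common.sh implementation, which also handles `>`, `==`, and pre-release suffixes):

```shell
#!/usr/bin/env bash
# Sketch of the "lt" version comparison traced above: split each version
# on dots/dashes/colons, then compare component by component numerically.
# Missing components are treated as 0, so "1" and "1.0" compare equal.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not "less than"
}

lt 1.15 2 && echo "1.15 < 2"          # matches the "lt 1.15 2" call in the log
lt 2.0 1.15 || echo "2.0 is not < 1.15"
```

In the log this check decides whether the installed lcov is older than 2.x, which selects which `--rc` coverage options get exported into LCOV_OPTS.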
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.802 --rc genhtml_branch_coverage=1 00:25:30.802 --rc genhtml_function_coverage=1 00:25:30.802 --rc genhtml_legend=1 00:25:30.802 --rc geninfo_all_blocks=1 00:25:30.802 --rc geninfo_unexecuted_blocks=1 00:25:30.802 00:25:30.802 ' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.802 --rc genhtml_branch_coverage=1 00:25:30.802 --rc genhtml_function_coverage=1 00:25:30.802 --rc genhtml_legend=1 00:25:30.802 --rc geninfo_all_blocks=1 00:25:30.802 --rc geninfo_unexecuted_blocks=1 00:25:30.802 00:25:30.802 ' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.802 --rc genhtml_branch_coverage=1 00:25:30.802 --rc genhtml_function_coverage=1 00:25:30.802 --rc genhtml_legend=1 00:25:30.802 --rc geninfo_all_blocks=1 00:25:30.802 --rc geninfo_unexecuted_blocks=1 00:25:30.802 00:25:30.802 ' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:30.802 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:30.802 --rc genhtml_branch_coverage=1 00:25:30.802 --rc genhtml_function_coverage=1 00:25:30.802 --rc genhtml_legend=1 00:25:30.802 --rc geninfo_all_blocks=1 00:25:30.802 --rc geninfo_unexecuted_blocks=1 00:25:30.802 00:25:30.802 ' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:30.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:30.802 
22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:25:30.802 22:56:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.457 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:37.457 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:25:37.457 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:37.457 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:37.458 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:37.458 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:37.458 Found net devices under 0000:86:00.0: cvl_0_0 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:37.458 22:56:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:37.458 Found net devices under 0000:86:00.1: cvl_0_1 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:37.458 22:56:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.458 22:56:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:37.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:25:37.458 00:25:37.458 --- 10.0.0.2 ping statistics --- 00:25:37.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.458 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.458 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:37.458 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.220 ms 00:25:37.458 00:25:37.458 --- 10.0.0.1 ping statistics --- 00:25:37.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.458 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:25:37.458 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2494016 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
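The `nvmf_tcp_init` steps traced above move one port of the two-port NIC into a private network namespace, so the SPDK target and the initiator can exchange real TCP traffic on a single host. A hedged sketch of the same wiring, using the interface names and addresses from this log (requires root; this is a command sequence for illustration, not the exact nvmf/common.sh code, which also handles flushes, iptables rules, and virtual fallbacks):

```shell
# Namespace wiring mirrored from the trace above; needs root and two
# physical net devices. Names cvl_0_0/cvl_0_1 come from this log.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"                        # target-side namespace
ip link set cvl_0_0 netns "$NS"           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator IP stays in root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
ping -c 1 10.0.0.2                        # initiator -> target check
ip netns exec "$NS" ping -c 1 10.0.0.1    # target -> initiator check
```

The two ping checks correspond to the ping statistics blocks in the log; once both succeed, every target-side command is wrapped in `ip netns exec cvl_0_0_ns_spdk`.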
nvmf/common.sh@510 -- # waitforlisten 2494016 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2494016 ']' 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:37.459 [2024-12-10 22:56:44.480899] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:25:37.459 [2024-12-10 22:56:44.480952] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.459 [2024-12-10 22:56:44.544080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.459 [2024-12-10 22:56:44.583976] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:37.459 [2024-12-10 22:56:44.584007] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:37.459 [2024-12-10 22:56:44.584015] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:37.459 [2024-12-10 22:56:44.584021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:37.459 [2024-12-10 22:56:44.584028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:37.459 [2024-12-10 22:56:44.584611] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:37.459 [2024-12-10 22:56:44.732204] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:37.459 [2024-12-10 22:56:44.740376] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:25:37.459 null0
00:25:37.459 [2024-12-10 22:56:44.772362] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2494040
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2494040 /tmp/host.sock
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2494040 ']'
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:25:37.459 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:37.459 [2024-12-10 22:56:44.840547] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:25:37.459 [2024-12-10 22:56:44.840589] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2494040 ]
00:25:37.459 [2024-12-10 22:56:44.916707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:37.459 [2024-12-10 22:56:44.959638] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.459 22:56:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:37.459 22:56:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:38.836 [2024-12-10 22:56:46.138637] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:38.836 [2024-12-10 22:56:46.138659] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:38.836 [2024-12-10 22:56:46.138670] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:38.836 [2024-12-10 22:56:46.265041] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:25:38.836 [2024-12-10 22:56:46.483171] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:25:38.836 [2024-12-10 22:56:46.483882] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1ba8800:1 started.
00:25:38.836 [2024-12-10 22:56:46.485206] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:25:38.836 [2024-12-10 22:56:46.485247] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:25:38.836 [2024-12-10 22:56:46.485266] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:25:38.836 [2024-12-10 22:56:46.485278] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:25:38.836 [2024-12-10 22:56:46.485296] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1
00:25:38.836 [2024-12-10 22:56:46.488014] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1ba8800 was disconnected and freed. delete nvme_qpair.
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]]
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
00:25:38.836 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev ''
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:39.095 22:56:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:40.032 22:56:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:41.409 22:56:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:42.346 22:56:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:43.282 22:56:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:44.219 [2024-12-10 22:56:51.926907] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out
00:25:44.219 [2024-12-10 22:56:51.926949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.219 [2024-12-10 22:56:51.926975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.219 [2024-12-10 22:56:51.926985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.219 [2024-12-10 22:56:51.926992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.219 [2024-12-10 22:56:51.927000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.219 [2024-12-10 22:56:51.927007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.219 [2024-12-10 22:56:51.927014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.219 [2024-12-10 22:56:51.927020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.219 [2024-12-10 22:56:51.927032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:25:44.219 [2024-12-10 22:56:51.927039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:44.219 [2024-12-10 22:56:51.927045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84fe0 is same with the state(6) to be set
00:25:44.219 [2024-12-10 22:56:51.936929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84fe0 (9): Bad file descriptor
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:44.219 22:56:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:44.478 [2024-12-10 22:56:51.946964] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:25:44.478 [2024-12-10 22:56:51.946976] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:25:44.478 [2024-12-10 22:56:51.946982] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:44.478 [2024-12-10 22:56:51.946987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:44.478 [2024-12-10 22:56:51.947009] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:45.415 22:56:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:45.415 [2024-12-10 22:56:52.986188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110
00:25:45.415 [2024-12-10 22:56:52.986258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b84fe0 with addr=10.0.0.2, port=4420
00:25:45.415 [2024-12-10 22:56:52.986289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b84fe0 is same with the state(6) to be set
00:25:45.415 [2024-12-10 22:56:52.986340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b84fe0 (9): Bad file descriptor
00:25:45.415 [2024-12-10 22:56:52.987280] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress.
00:25:45.415 [2024-12-10 22:56:52.987342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:45.415 [2024-12-10 22:56:52.987365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:45.415 [2024-12-10 22:56:52.987387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:45.415 [2024-12-10 22:56:52.987406] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:45.415 [2024-12-10 22:56:52.987422] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:45.415 [2024-12-10 22:56:52.987436] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:45.415 [2024-12-10 22:56:52.987457] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:25:45.415 [2024-12-10 22:56:52.987480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:25:45.415 22:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:45.415 22:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]]
00:25:45.415 22:56:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:46.352 [2024-12-10 22:56:53.990000] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:25:46.352 [2024-12-10 22:56:53.990021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state.
00:25:46.352 [2024-12-10 22:56:53.990032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:25:46.352 [2024-12-10 22:56:53.990038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:25:46.352 [2024-12-10 22:56:53.990045] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:25:46.352 [2024-12-10 22:56:53.990052] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:25:46.352 [2024-12-10 22:56:53.990056] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:25:46.352 [2024-12-10 22:56:53.990060] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:25:46.352 [2024-12-10 22:56:53.990080] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:25:46.352 [2024-12-10 22:56:53.990101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:46.352 [2024-12-10 22:56:53.990110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.352 [2024-12-10 22:56:53.990119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:46.352 [2024-12-10 22:56:53.990125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.352 [2024-12-10 22:56:53.990133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:46.352 [2024-12-10 22:56:53.990139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.352 [2024-12-10 22:56:53.990146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:46.352 [2024-12-10 22:56:53.990152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.352 [2024-12-10 22:56:53.990165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:25:46.352 [2024-12-10 22:56:53.990171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:46.352 [2024-12-10 22:56:53.990178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:25:46.352 [2024-12-10 22:56:53.990530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b742f0 (9): Bad file descriptor
00:25:46.352 [2024-12-10 22:56:53.991541] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:25:46.352 [2024-12-10 22:56:53.991552] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]]
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:46.352 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:25:46.611 22:56:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:25:47.547 22:56:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1
00:25:48.482 [2024-12-10 22:56:56.045629] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:25:48.482 [2024-12-10 22:56:56.045646] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:25:48.482 [2024-12-10 22:56:56.045658] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:25:48.482 [2024-12-10 22:56:56.172037] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:25:48.742 [2024-12-10 22:56:56.233634] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:25:48.742 [2024-12-10 22:56:56.234196] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1b902a0:1 started.
00:25:48.742 [2024-12-10 22:56:56.235269] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:25:48.742 [2024-12-10 22:56:56.235300] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:25:48.742 [2024-12-10 22:56:56.235318] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:25:48.742 [2024-12-10 22:56:56.235332] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:25:48.742 [2024-12-10 22:56:56.235339] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:25:48.742 [2024-12-10 22:56:56.243065] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1b902a0 was disconnected and freed. delete nvme_qpair.
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name'
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2494040
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2494040 ']'
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2494040
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494040
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494040'
00:25:48.742 killing process with pid 2494040
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2494040
00:25:48.742 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2494040
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20}
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:25:49.001 rmmod nvme_tcp
00:25:49.001 rmmod nvme_fabrics
00:25:49.001 rmmod nvme_keyring
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2494016 ']'
00:25:49.001 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2494016
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2494016 ']'
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2494016
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2494016
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2494016'
00:25:49.002 killing process with pid 2494016
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2494016
00:25:49.002 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2494016
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:49.261 22:56:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:25:51.167
00:25:51.167 real 0m20.559s
00:25:51.167 user 0m24.899s
00:25:51.167 sys 0m5.828s
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:25:51.167 ************************************
00:25:51.167 END TEST nvmf_discovery_remove_ifc
00:25:51.167 ************************************
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:25:51.167 22:56:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:51.427 ************************************
00:25:51.427 START TEST nvmf_identify_kernel_target 00:25:51.427 ************************************ 00:25:51.427 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:51.427 * Looking for test storage... 00:25:51.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:25:51.427 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:51.427 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:25:51.427 22:56:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:25:51.427 22:56:59 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.427 --rc genhtml_branch_coverage=1 00:25:51.427 --rc genhtml_function_coverage=1 00:25:51.427 --rc genhtml_legend=1 00:25:51.427 --rc geninfo_all_blocks=1 00:25:51.427 --rc geninfo_unexecuted_blocks=1 00:25:51.427 00:25:51.427 ' 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.427 --rc genhtml_branch_coverage=1 00:25:51.427 --rc genhtml_function_coverage=1 00:25:51.427 --rc genhtml_legend=1 00:25:51.427 --rc geninfo_all_blocks=1 00:25:51.427 --rc geninfo_unexecuted_blocks=1 00:25:51.427 00:25:51.427 ' 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.427 --rc genhtml_branch_coverage=1 00:25:51.427 --rc genhtml_function_coverage=1 00:25:51.427 --rc genhtml_legend=1 00:25:51.427 --rc geninfo_all_blocks=1 00:25:51.427 --rc geninfo_unexecuted_blocks=1 00:25:51.427 00:25:51.427 ' 00:25:51.427 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:51.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:51.427 --rc genhtml_branch_coverage=1 00:25:51.427 --rc genhtml_function_coverage=1 00:25:51.427 --rc genhtml_legend=1 00:25:51.427 --rc geninfo_all_blocks=1 
00:25:51.427 --rc geninfo_unexecuted_blocks=1 00:25:51.428 00:25:51.428 ' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:51.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:25:51.428 22:56:59 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:58.000 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:58.001 22:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:58.001 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.001 22:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:58.001 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.001 22:57:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:58.001 Found net devices under 0000:86:00.0: cvl_0_0 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:58.001 Found net devices under 0000:86:00.1: cvl_0_1 
00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:58.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:58.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:25:58.001 00:25:58.001 --- 10.0.0.2 ping statistics --- 00:25:58.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.001 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:58.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:58.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:25:58.001 00:25:58.001 --- 10.0.0.1 ping statistics --- 00:25:58.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:58.001 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:58.001 22:57:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:58.001 
22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.001 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:58.002 22:57:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:26:00.538 Waiting for block devices as requested 00:26:00.538 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:00.538 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:00.538 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:00.538 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:00.538 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:00.798 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:00.798 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:00.798 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:00.798 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:01.056 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:01.056 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:01.056 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:01.315 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:01.315 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:01.315 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:26:01.315 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:01.574 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:26:01.574 No valid GPT data, bailing 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:01.574 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:01.834 00:26:01.834 Discovery Log Number of Records 2, Generation counter 2 00:26:01.834 =====Discovery Log Entry 0====== 00:26:01.834 trtype: tcp 00:26:01.834 adrfam: ipv4 00:26:01.834 subtype: current discovery subsystem 
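The `mkdir`/`echo`/`ln -s` trace at nvmf/common.sh@686-705 above follows the standard Linux `nvmet` configfs layout for a kernel TCP target. A sketch of the same steps — `emit_kernel_target` is a hypothetical helper that prints the command list rather than executing it, since the real thing needs root plus the `nvmet`/`nvmet-tcp` modules; the NQN, backing device, and address come from the log:

```shell
# Emit the configfs commands that build a kernel NVMe-oF TCP target.
emit_kernel_target() {
  local nqn=$1 dev=$2 ip=$3 port=$4
  local sub=/sys/kernel/config/nvmet/subsystems/$nqn
  local p=/sys/kernel/config/nvmet/ports/1
  cat <<EOF
mkdir $sub
mkdir $sub/namespaces/1
mkdir $p
echo SPDK-$nqn > $sub/attr_model           # model string reported by Identify
echo 1 > $sub/attr_allow_any_host
echo $dev > $sub/namespaces/1/device_path  # back the namespace with a block device
echo 1 > $sub/namespaces/1/enable
echo $ip > $p/addr_traddr
echo tcp > $p/addr_trtype
echo $port > $p/addr_trsvcid
echo ipv4 > $p/addr_adrfam
ln -s $sub $p/subsystems/                  # export the subsystem on the port
EOF
}
emit_kernel_target nqn.2016-06.io.spdk:testnqn /dev/nvme0n1 10.0.0.1 4420
```

Once the symlink lands, the port serves both the discovery subsystem and the data subsystem, which is why the `nvme discover` output that follows lists two records.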
00:26:01.834 treq: not specified, sq flow control disable supported 00:26:01.834 portid: 1 00:26:01.834 trsvcid: 4420 00:26:01.834 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:01.834 traddr: 10.0.0.1 00:26:01.834 eflags: none 00:26:01.834 sectype: none 00:26:01.834 =====Discovery Log Entry 1====== 00:26:01.834 trtype: tcp 00:26:01.834 adrfam: ipv4 00:26:01.834 subtype: nvme subsystem 00:26:01.834 treq: not specified, sq flow control disable supported 00:26:01.834 portid: 1 00:26:01.834 trsvcid: 4420 00:26:01.834 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:01.834 traddr: 10.0.0.1 00:26:01.834 eflags: none 00:26:01.834 sectype: none 00:26:01.834 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:01.834 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:01.834 ===================================================== 00:26:01.834 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:01.834 ===================================================== 00:26:01.834 Controller Capabilities/Features 00:26:01.834 ================================ 00:26:01.834 Vendor ID: 0000 00:26:01.834 Subsystem Vendor ID: 0000 00:26:01.834 Serial Number: 724b9cf3a56b7f2b9111 00:26:01.834 Model Number: Linux 00:26:01.834 Firmware Version: 6.8.9-20 00:26:01.834 Recommended Arb Burst: 0 00:26:01.834 IEEE OUI Identifier: 00 00 00 00:26:01.834 Multi-path I/O 00:26:01.834 May have multiple subsystem ports: No 00:26:01.834 May have multiple controllers: No 00:26:01.834 Associated with SR-IOV VF: No 00:26:01.834 Max Data Transfer Size: Unlimited 00:26:01.834 Max Number of Namespaces: 0 00:26:01.834 Max Number of I/O Queues: 1024 00:26:01.834 NVMe Specification Version (VS): 1.3 00:26:01.834 NVMe Specification Version (Identify): 1.3 00:26:01.834 Maximum Queue Entries: 1024 
00:26:01.834 Contiguous Queues Required: No 00:26:01.834 Arbitration Mechanisms Supported 00:26:01.834 Weighted Round Robin: Not Supported 00:26:01.834 Vendor Specific: Not Supported 00:26:01.834 Reset Timeout: 7500 ms 00:26:01.834 Doorbell Stride: 4 bytes 00:26:01.834 NVM Subsystem Reset: Not Supported 00:26:01.834 Command Sets Supported 00:26:01.834 NVM Command Set: Supported 00:26:01.834 Boot Partition: Not Supported 00:26:01.834 Memory Page Size Minimum: 4096 bytes 00:26:01.834 Memory Page Size Maximum: 4096 bytes 00:26:01.834 Persistent Memory Region: Not Supported 00:26:01.834 Optional Asynchronous Events Supported 00:26:01.834 Namespace Attribute Notices: Not Supported 00:26:01.834 Firmware Activation Notices: Not Supported 00:26:01.834 ANA Change Notices: Not Supported 00:26:01.834 PLE Aggregate Log Change Notices: Not Supported 00:26:01.834 LBA Status Info Alert Notices: Not Supported 00:26:01.834 EGE Aggregate Log Change Notices: Not Supported 00:26:01.834 Normal NVM Subsystem Shutdown event: Not Supported 00:26:01.834 Zone Descriptor Change Notices: Not Supported 00:26:01.834 Discovery Log Change Notices: Supported 00:26:01.834 Controller Attributes 00:26:01.834 128-bit Host Identifier: Not Supported 00:26:01.834 Non-Operational Permissive Mode: Not Supported 00:26:01.834 NVM Sets: Not Supported 00:26:01.834 Read Recovery Levels: Not Supported 00:26:01.834 Endurance Groups: Not Supported 00:26:01.834 Predictable Latency Mode: Not Supported 00:26:01.834 Traffic Based Keep ALive: Not Supported 00:26:01.834 Namespace Granularity: Not Supported 00:26:01.834 SQ Associations: Not Supported 00:26:01.834 UUID List: Not Supported 00:26:01.834 Multi-Domain Subsystem: Not Supported 00:26:01.834 Fixed Capacity Management: Not Supported 00:26:01.834 Variable Capacity Management: Not Supported 00:26:01.834 Delete Endurance Group: Not Supported 00:26:01.834 Delete NVM Set: Not Supported 00:26:01.834 Extended LBA Formats Supported: Not Supported 00:26:01.834 Flexible 
Data Placement Supported: Not Supported 00:26:01.834 00:26:01.834 Controller Memory Buffer Support 00:26:01.834 ================================ 00:26:01.834 Supported: No 00:26:01.834 00:26:01.834 Persistent Memory Region Support 00:26:01.834 ================================ 00:26:01.834 Supported: No 00:26:01.834 00:26:01.834 Admin Command Set Attributes 00:26:01.834 ============================ 00:26:01.834 Security Send/Receive: Not Supported 00:26:01.834 Format NVM: Not Supported 00:26:01.834 Firmware Activate/Download: Not Supported 00:26:01.834 Namespace Management: Not Supported 00:26:01.834 Device Self-Test: Not Supported 00:26:01.834 Directives: Not Supported 00:26:01.834 NVMe-MI: Not Supported 00:26:01.834 Virtualization Management: Not Supported 00:26:01.834 Doorbell Buffer Config: Not Supported 00:26:01.835 Get LBA Status Capability: Not Supported 00:26:01.835 Command & Feature Lockdown Capability: Not Supported 00:26:01.835 Abort Command Limit: 1 00:26:01.835 Async Event Request Limit: 1 00:26:01.835 Number of Firmware Slots: N/A 00:26:01.835 Firmware Slot 1 Read-Only: N/A 00:26:01.835 Firmware Activation Without Reset: N/A 00:26:01.835 Multiple Update Detection Support: N/A 00:26:01.835 Firmware Update Granularity: No Information Provided 00:26:01.835 Per-Namespace SMART Log: No 00:26:01.835 Asymmetric Namespace Access Log Page: Not Supported 00:26:01.835 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:01.835 Command Effects Log Page: Not Supported 00:26:01.835 Get Log Page Extended Data: Supported 00:26:01.835 Telemetry Log Pages: Not Supported 00:26:01.835 Persistent Event Log Pages: Not Supported 00:26:01.835 Supported Log Pages Log Page: May Support 00:26:01.835 Commands Supported & Effects Log Page: Not Supported 00:26:01.835 Feature Identifiers & Effects Log Page:May Support 00:26:01.835 NVMe-MI Commands & Effects Log Page: May Support 00:26:01.835 Data Area 4 for Telemetry Log: Not Supported 00:26:01.835 Error Log Page Entries 
Supported: 1 00:26:01.835 Keep Alive: Not Supported 00:26:01.835 00:26:01.835 NVM Command Set Attributes 00:26:01.835 ========================== 00:26:01.835 Submission Queue Entry Size 00:26:01.835 Max: 1 00:26:01.835 Min: 1 00:26:01.835 Completion Queue Entry Size 00:26:01.835 Max: 1 00:26:01.835 Min: 1 00:26:01.835 Number of Namespaces: 0 00:26:01.835 Compare Command: Not Supported 00:26:01.835 Write Uncorrectable Command: Not Supported 00:26:01.835 Dataset Management Command: Not Supported 00:26:01.835 Write Zeroes Command: Not Supported 00:26:01.835 Set Features Save Field: Not Supported 00:26:01.835 Reservations: Not Supported 00:26:01.835 Timestamp: Not Supported 00:26:01.835 Copy: Not Supported 00:26:01.835 Volatile Write Cache: Not Present 00:26:01.835 Atomic Write Unit (Normal): 1 00:26:01.835 Atomic Write Unit (PFail): 1 00:26:01.835 Atomic Compare & Write Unit: 1 00:26:01.835 Fused Compare & Write: Not Supported 00:26:01.835 Scatter-Gather List 00:26:01.835 SGL Command Set: Supported 00:26:01.835 SGL Keyed: Not Supported 00:26:01.835 SGL Bit Bucket Descriptor: Not Supported 00:26:01.835 SGL Metadata Pointer: Not Supported 00:26:01.835 Oversized SGL: Not Supported 00:26:01.835 SGL Metadata Address: Not Supported 00:26:01.835 SGL Offset: Supported 00:26:01.835 Transport SGL Data Block: Not Supported 00:26:01.835 Replay Protected Memory Block: Not Supported 00:26:01.835 00:26:01.835 Firmware Slot Information 00:26:01.835 ========================= 00:26:01.835 Active slot: 0 00:26:01.835 00:26:01.835 00:26:01.835 Error Log 00:26:01.835 ========= 00:26:01.835 00:26:01.835 Active Namespaces 00:26:01.835 ================= 00:26:01.835 Discovery Log Page 00:26:01.835 ================== 00:26:01.835 Generation Counter: 2 00:26:01.835 Number of Records: 2 00:26:01.835 Record Format: 0 00:26:01.835 00:26:01.835 Discovery Log Entry 0 00:26:01.835 ---------------------- 00:26:01.835 Transport Type: 3 (TCP) 00:26:01.835 Address Family: 1 (IPv4) 00:26:01.835 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:26:01.835 Entry Flags: 00:26:01.835 Duplicate Returned Information: 0 00:26:01.835 Explicit Persistent Connection Support for Discovery: 0 00:26:01.835 Transport Requirements: 00:26:01.835 Secure Channel: Not Specified 00:26:01.835 Port ID: 1 (0x0001) 00:26:01.835 Controller ID: 65535 (0xffff) 00:26:01.835 Admin Max SQ Size: 32 00:26:01.835 Transport Service Identifier: 4420 00:26:01.835 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:01.835 Transport Address: 10.0.0.1 00:26:01.835 Discovery Log Entry 1 00:26:01.835 ---------------------- 00:26:01.835 Transport Type: 3 (TCP) 00:26:01.835 Address Family: 1 (IPv4) 00:26:01.835 Subsystem Type: 2 (NVM Subsystem) 00:26:01.835 Entry Flags: 00:26:01.835 Duplicate Returned Information: 0 00:26:01.835 Explicit Persistent Connection Support for Discovery: 0 00:26:01.835 Transport Requirements: 00:26:01.835 Secure Channel: Not Specified 00:26:01.835 Port ID: 1 (0x0001) 00:26:01.835 Controller ID: 65535 (0xffff) 00:26:01.835 Admin Max SQ Size: 32 00:26:01.835 Transport Service Identifier: 4420 00:26:01.835 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:01.835 Transport Address: 10.0.0.1 00:26:01.835 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:01.835 get_feature(0x01) failed 00:26:01.835 get_feature(0x02) failed 00:26:01.835 get_feature(0x04) failed 00:26:01.835 ===================================================== 00:26:01.835 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:01.835 ===================================================== 00:26:01.835 Controller Capabilities/Features 00:26:01.835 ================================ 00:26:01.835 Vendor ID: 0000 00:26:01.835 Subsystem Vendor ID: 
0000 00:26:01.835 Serial Number: 3dfe07bfe4c81d59241b 00:26:01.835 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:01.835 Firmware Version: 6.8.9-20 00:26:01.835 Recommended Arb Burst: 6 00:26:01.835 IEEE OUI Identifier: 00 00 00 00:26:01.835 Multi-path I/O 00:26:01.835 May have multiple subsystem ports: Yes 00:26:01.835 May have multiple controllers: Yes 00:26:01.835 Associated with SR-IOV VF: No 00:26:01.835 Max Data Transfer Size: Unlimited 00:26:01.835 Max Number of Namespaces: 1024 00:26:01.835 Max Number of I/O Queues: 128 00:26:01.835 NVMe Specification Version (VS): 1.3 00:26:01.835 NVMe Specification Version (Identify): 1.3 00:26:01.835 Maximum Queue Entries: 1024 00:26:01.835 Contiguous Queues Required: No 00:26:01.835 Arbitration Mechanisms Supported 00:26:01.835 Weighted Round Robin: Not Supported 00:26:01.835 Vendor Specific: Not Supported 00:26:01.835 Reset Timeout: 7500 ms 00:26:01.835 Doorbell Stride: 4 bytes 00:26:01.835 NVM Subsystem Reset: Not Supported 00:26:01.835 Command Sets Supported 00:26:01.835 NVM Command Set: Supported 00:26:01.835 Boot Partition: Not Supported 00:26:01.835 Memory Page Size Minimum: 4096 bytes 00:26:01.835 Memory Page Size Maximum: 4096 bytes 00:26:01.835 Persistent Memory Region: Not Supported 00:26:01.835 Optional Asynchronous Events Supported 00:26:01.835 Namespace Attribute Notices: Supported 00:26:01.835 Firmware Activation Notices: Not Supported 00:26:01.835 ANA Change Notices: Supported 00:26:01.835 PLE Aggregate Log Change Notices: Not Supported 00:26:01.835 LBA Status Info Alert Notices: Not Supported 00:26:01.835 EGE Aggregate Log Change Notices: Not Supported 00:26:01.835 Normal NVM Subsystem Shutdown event: Not Supported 00:26:01.835 Zone Descriptor Change Notices: Not Supported 00:26:01.835 Discovery Log Change Notices: Not Supported 00:26:01.835 Controller Attributes 00:26:01.835 128-bit Host Identifier: Supported 00:26:01.835 Non-Operational Permissive Mode: Not Supported 00:26:01.835 NVM Sets: Not 
Supported 00:26:01.835 Read Recovery Levels: Not Supported 00:26:01.835 Endurance Groups: Not Supported 00:26:01.835 Predictable Latency Mode: Not Supported 00:26:01.835 Traffic Based Keep ALive: Supported 00:26:01.835 Namespace Granularity: Not Supported 00:26:01.835 SQ Associations: Not Supported 00:26:01.835 UUID List: Not Supported 00:26:01.835 Multi-Domain Subsystem: Not Supported 00:26:01.835 Fixed Capacity Management: Not Supported 00:26:01.835 Variable Capacity Management: Not Supported 00:26:01.835 Delete Endurance Group: Not Supported 00:26:01.835 Delete NVM Set: Not Supported 00:26:01.835 Extended LBA Formats Supported: Not Supported 00:26:01.835 Flexible Data Placement Supported: Not Supported 00:26:01.835 00:26:01.835 Controller Memory Buffer Support 00:26:01.835 ================================ 00:26:01.835 Supported: No 00:26:01.835 00:26:01.835 Persistent Memory Region Support 00:26:01.835 ================================ 00:26:01.835 Supported: No 00:26:01.835 00:26:01.835 Admin Command Set Attributes 00:26:01.835 ============================ 00:26:01.835 Security Send/Receive: Not Supported 00:26:01.835 Format NVM: Not Supported 00:26:01.835 Firmware Activate/Download: Not Supported 00:26:01.835 Namespace Management: Not Supported 00:26:01.835 Device Self-Test: Not Supported 00:26:01.835 Directives: Not Supported 00:26:01.835 NVMe-MI: Not Supported 00:26:01.835 Virtualization Management: Not Supported 00:26:01.835 Doorbell Buffer Config: Not Supported 00:26:01.835 Get LBA Status Capability: Not Supported 00:26:01.835 Command & Feature Lockdown Capability: Not Supported 00:26:01.835 Abort Command Limit: 4 00:26:01.835 Async Event Request Limit: 4 00:26:01.835 Number of Firmware Slots: N/A 00:26:01.835 Firmware Slot 1 Read-Only: N/A 00:26:01.835 Firmware Activation Without Reset: N/A 00:26:01.835 Multiple Update Detection Support: N/A 00:26:01.835 Firmware Update Granularity: No Information Provided 00:26:01.835 Per-Namespace SMART Log: Yes 
00:26:01.835 Asymmetric Namespace Access Log Page: Supported 00:26:01.835 ANA Transition Time : 10 sec 00:26:01.835 00:26:01.835 Asymmetric Namespace Access Capabilities 00:26:01.835 ANA Optimized State : Supported 00:26:01.836 ANA Non-Optimized State : Supported 00:26:01.836 ANA Inaccessible State : Supported 00:26:01.836 ANA Persistent Loss State : Supported 00:26:01.836 ANA Change State : Supported 00:26:01.836 ANAGRPID is not changed : No 00:26:01.836 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:01.836 00:26:01.836 ANA Group Identifier Maximum : 128 00:26:01.836 Number of ANA Group Identifiers : 128 00:26:01.836 Max Number of Allowed Namespaces : 1024 00:26:01.836 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:01.836 Command Effects Log Page: Supported 00:26:01.836 Get Log Page Extended Data: Supported 00:26:01.836 Telemetry Log Pages: Not Supported 00:26:01.836 Persistent Event Log Pages: Not Supported 00:26:01.836 Supported Log Pages Log Page: May Support 00:26:01.836 Commands Supported & Effects Log Page: Not Supported 00:26:01.836 Feature Identifiers & Effects Log Page:May Support 00:26:01.836 NVMe-MI Commands & Effects Log Page: May Support 00:26:01.836 Data Area 4 for Telemetry Log: Not Supported 00:26:01.836 Error Log Page Entries Supported: 128 00:26:01.836 Keep Alive: Supported 00:26:01.836 Keep Alive Granularity: 1000 ms 00:26:01.836 00:26:01.836 NVM Command Set Attributes 00:26:01.836 ========================== 00:26:01.836 Submission Queue Entry Size 00:26:01.836 Max: 64 00:26:01.836 Min: 64 00:26:01.836 Completion Queue Entry Size 00:26:01.836 Max: 16 00:26:01.836 Min: 16 00:26:01.836 Number of Namespaces: 1024 00:26:01.836 Compare Command: Not Supported 00:26:01.836 Write Uncorrectable Command: Not Supported 00:26:01.836 Dataset Management Command: Supported 00:26:01.836 Write Zeroes Command: Supported 00:26:01.836 Set Features Save Field: Not Supported 00:26:01.836 Reservations: Not Supported 00:26:01.836 Timestamp: Not Supported 
00:26:01.836 Copy: Not Supported 00:26:01.836 Volatile Write Cache: Present 00:26:01.836 Atomic Write Unit (Normal): 1 00:26:01.836 Atomic Write Unit (PFail): 1 00:26:01.836 Atomic Compare & Write Unit: 1 00:26:01.836 Fused Compare & Write: Not Supported 00:26:01.836 Scatter-Gather List 00:26:01.836 SGL Command Set: Supported 00:26:01.836 SGL Keyed: Not Supported 00:26:01.836 SGL Bit Bucket Descriptor: Not Supported 00:26:01.836 SGL Metadata Pointer: Not Supported 00:26:01.836 Oversized SGL: Not Supported 00:26:01.836 SGL Metadata Address: Not Supported 00:26:01.836 SGL Offset: Supported 00:26:01.836 Transport SGL Data Block: Not Supported 00:26:01.836 Replay Protected Memory Block: Not Supported 00:26:01.836 00:26:01.836 Firmware Slot Information 00:26:01.836 ========================= 00:26:01.836 Active slot: 0 00:26:01.836 00:26:01.836 Asymmetric Namespace Access 00:26:01.836 =========================== 00:26:01.836 Change Count : 0 00:26:01.836 Number of ANA Group Descriptors : 1 00:26:01.836 ANA Group Descriptor : 0 00:26:01.836 ANA Group ID : 1 00:26:01.836 Number of NSID Values : 1 00:26:01.836 Change Count : 0 00:26:01.836 ANA State : 1 00:26:01.836 Namespace Identifier : 1 00:26:01.836 00:26:01.836 Commands Supported and Effects 00:26:01.836 ============================== 00:26:01.836 Admin Commands 00:26:01.836 -------------- 00:26:01.836 Get Log Page (02h): Supported 00:26:01.836 Identify (06h): Supported 00:26:01.836 Abort (08h): Supported 00:26:01.836 Set Features (09h): Supported 00:26:01.836 Get Features (0Ah): Supported 00:26:01.836 Asynchronous Event Request (0Ch): Supported 00:26:01.836 Keep Alive (18h): Supported 00:26:01.836 I/O Commands 00:26:01.836 ------------ 00:26:01.836 Flush (00h): Supported 00:26:01.836 Write (01h): Supported LBA-Change 00:26:01.836 Read (02h): Supported 00:26:01.836 Write Zeroes (08h): Supported LBA-Change 00:26:01.836 Dataset Management (09h): Supported 00:26:01.836 00:26:01.836 Error Log 00:26:01.836 ========= 
00:26:01.836 Entry: 0 00:26:01.836 Error Count: 0x3 00:26:01.836 Submission Queue Id: 0x0 00:26:01.836 Command Id: 0x5 00:26:01.836 Phase Bit: 0 00:26:01.836 Status Code: 0x2 00:26:01.836 Status Code Type: 0x0 00:26:01.836 Do Not Retry: 1 00:26:02.096 Error Location: 0x28 00:26:02.096 LBA: 0x0 00:26:02.096 Namespace: 0x0 00:26:02.096 Vendor Log Page: 0x0 00:26:02.096 ----------- 00:26:02.096 Entry: 1 00:26:02.096 Error Count: 0x2 00:26:02.096 Submission Queue Id: 0x0 00:26:02.096 Command Id: 0x5 00:26:02.096 Phase Bit: 0 00:26:02.096 Status Code: 0x2 00:26:02.096 Status Code Type: 0x0 00:26:02.096 Do Not Retry: 1 00:26:02.096 Error Location: 0x28 00:26:02.096 LBA: 0x0 00:26:02.096 Namespace: 0x0 00:26:02.096 Vendor Log Page: 0x0 00:26:02.096 ----------- 00:26:02.096 Entry: 2 00:26:02.096 Error Count: 0x1 00:26:02.096 Submission Queue Id: 0x0 00:26:02.096 Command Id: 0x4 00:26:02.096 Phase Bit: 0 00:26:02.096 Status Code: 0x2 00:26:02.096 Status Code Type: 0x0 00:26:02.096 Do Not Retry: 1 00:26:02.096 Error Location: 0x28 00:26:02.096 LBA: 0x0 00:26:02.096 Namespace: 0x0 00:26:02.096 Vendor Log Page: 0x0 00:26:02.096 00:26:02.096 Number of Queues 00:26:02.096 ================ 00:26:02.096 Number of I/O Submission Queues: 128 00:26:02.096 Number of I/O Completion Queues: 128 00:26:02.096 00:26:02.096 ZNS Specific Controller Data 00:26:02.096 ============================ 00:26:02.096 Zone Append Size Limit: 0 00:26:02.096 00:26:02.096 00:26:02.096 Active Namespaces 00:26:02.096 ================= 00:26:02.096 get_feature(0x05) failed 00:26:02.096 Namespace ID:1 00:26:02.096 Command Set Identifier: NVM (00h) 00:26:02.096 Deallocate: Supported 00:26:02.096 Deallocated/Unwritten Error: Not Supported 00:26:02.096 Deallocated Read Value: Unknown 00:26:02.096 Deallocate in Write Zeroes: Not Supported 00:26:02.096 Deallocated Guard Field: 0xFFFF 00:26:02.096 Flush: Supported 00:26:02.096 Reservation: Not Supported 00:26:02.096 Namespace Sharing Capabilities: Multiple 
Controllers 00:26:02.096 Size (in LBAs): 1953525168 (931GiB) 00:26:02.096 Capacity (in LBAs): 1953525168 (931GiB) 00:26:02.096 Utilization (in LBAs): 1953525168 (931GiB) 00:26:02.096 UUID: dd5bb711-0ac7-42a7-8676-6bb748f9b5bf 00:26:02.096 Thin Provisioning: Not Supported 00:26:02.096 Per-NS Atomic Units: Yes 00:26:02.096 Atomic Boundary Size (Normal): 0 00:26:02.096 Atomic Boundary Size (PFail): 0 00:26:02.096 Atomic Boundary Offset: 0 00:26:02.096 NGUID/EUI64 Never Reused: No 00:26:02.096 ANA group ID: 1 00:26:02.096 Namespace Write Protected: No 00:26:02.096 Number of LBA Formats: 1 00:26:02.096 Current LBA Format: LBA Format #00 00:26:02.096 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:02.096 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:02.096 rmmod nvme_tcp 00:26:02.096 rmmod nvme_fabrics 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:02.096 22:57:09 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:26:04.001 22:57:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:04.001 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:04.260 22:57:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:26:06.794 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:07.053 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
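The `clean_kernel_target` trace above (nvmf/common.sh@712-723) tears the target down in reverse order: disable the namespace, unlink the port, remove the configfs directories, then unload the modules. The same sequence as a printed sketch — `emit_kernel_target_cleanup` is a hypothetical helper, not part of SPDK's scripts:

```shell
# Emit the teardown for a kernel nvmet target (reverse of the configfs setup).
emit_kernel_target_cleanup() {
  local nqn=$1
  local sub=/sys/kernel/config/nvmet/subsystems/$nqn
  cat <<EOF
echo 0 > $sub/namespaces/1/enable          # disable the namespace before removal
rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn
rmdir $sub/namespaces/1
rmdir /sys/kernel/config/nvmet/ports/1
rmdir $sub
modprobe -r nvmet_tcp nvmet
EOF
}
emit_kernel_target_cleanup nqn.2016-06.io.spdk:testnqn
```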
00:26:07.990 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:07.990 00:26:07.990 real 0m16.744s 00:26:07.990 user 0m4.418s 00:26:07.990 sys 0m8.687s 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:07.990 ************************************ 00:26:07.990 END TEST nvmf_identify_kernel_target 00:26:07.990 ************************************ 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.990 ************************************ 00:26:07.990 START TEST nvmf_auth_host 00:26:07.990 ************************************ 00:26:07.990 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:08.249 * Looking for test storage... 
00:26:08.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:26:08.249 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:08.249 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:08.249 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:08.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.250 --rc genhtml_branch_coverage=1 00:26:08.250 --rc genhtml_function_coverage=1 00:26:08.250 --rc genhtml_legend=1 00:26:08.250 --rc geninfo_all_blocks=1 00:26:08.250 --rc geninfo_unexecuted_blocks=1 00:26:08.250 00:26:08.250 ' 00:26:08.250 22:57:15 
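[annotation] The trace above is scripts/common.sh's `cmp_versions` deciding whether the installed lcov predates 2.x (`lt 1.15 2` holds, so the branch-coverage options get set below). The same component-wise comparison can be condensed into one function; numeric-only components are assumed, matching what the log compares:

```shell
lt_version() {  # succeeds iff dotted version $1 < $2, comparing numerically
  local -a v1 v2
  local i n
  IFS=.- read -ra v1 <<< "$1"   # split on both '.' and '-', as in the trace
  IFS=.- read -ra v2 <<< "$2"
  n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    # missing components count as 0, so "1.15" vs "2" compares 1 vs 2, 15 vs 0
    if (( ${v1[i]:-0} < ${v2[i]:-0} )); then return 0; fi
    if (( ${v1[i]:-0} > ${v2[i]:-0} )); then return 1; fi
  done
  return 1   # equal versions are not less-than
}
```

The numeric comparison is the point: a naive string compare would wrongly rank 1.9 above 1.10.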
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:08.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.250 --rc genhtml_branch_coverage=1 00:26:08.250 --rc genhtml_function_coverage=1 00:26:08.250 --rc genhtml_legend=1 00:26:08.250 --rc geninfo_all_blocks=1 00:26:08.250 --rc geninfo_unexecuted_blocks=1 00:26:08.250 00:26:08.250 ' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:08.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.250 --rc genhtml_branch_coverage=1 00:26:08.250 --rc genhtml_function_coverage=1 00:26:08.250 --rc genhtml_legend=1 00:26:08.250 --rc geninfo_all_blocks=1 00:26:08.250 --rc geninfo_unexecuted_blocks=1 00:26:08.250 00:26:08.250 ' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:08.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:08.250 --rc genhtml_branch_coverage=1 00:26:08.250 --rc genhtml_function_coverage=1 00:26:08.250 --rc genhtml_legend=1 00:26:08.250 --rc geninfo_all_blocks=1 00:26:08.250 --rc geninfo_unexecuted_blocks=1 00:26:08.250 00:26:08.250 ' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.250 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:08.250 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
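[annotation] The exported PATH echoed above repeats /opt/protoc, /opt/go, and /opt/golangci many times because paths/export.sh prepends the same toolchain directories on every source; harmless, just noisy. A small helper (not part of SPDK, name is mine) shows how such a PATH could be collapsed while preserving lookup order:

```shell
dedupe_path() {  # keep the first occurrence of each colon-separated entry
  local out='' entry
  local IFS=:
  for entry in $1; do                  # unquoted on purpose: split on IFS=:
    case ":$out:" in
      *":$entry:"*) ;;                 # already seen, drop the repeat
      *) out=${out:+$out:}$entry ;;
    esac
  done
  printf '%s\n' "$out"
}
```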
digests=("sha256" "sha384" "sha512") 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:08.250 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:08.251 22:57:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:26:08.251 22:57:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:14.820 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:14.820 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:14.820 Found net devices under 0000:86:00.0: cvl_0_0 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:14.820 Found net devices under 0000:86:00.1: cvl_0_1 00:26:14.820 22:57:21 
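[annotation] The discovery loop above (common.sh@410-429) resolves each NIC PCI address to its kernel interface name by globbing the sysfs `net/` subdirectory and stripping the path prefix with the `##*/` expansion, yielding cvl_0_0 and cvl_0_1 here. A standalone sketch, with the sysfs root parameterized so it can be pointed at a scratch tree (the helper name is mine):

```shell
pci_net_names() {  # print the kernel netdev name(s) behind one PCI address
  local pci=$1 base=${2:-/sys/bus/pci/devices} d
  for d in "$base/$pci/net/"*; do
    [ -e "$d" ] || continue            # glob matched nothing: no netdev
    printf '%s\n' "${d##*/}"           # same prefix-stripping as the trace
  done
}
```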
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:14.820 22:57:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:14.820 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:14.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:14.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.308 ms 00:26:14.821 00:26:14.821 --- 10.0.0.2 ping statistics --- 00:26:14.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.821 rtt min/avg/max/mdev = 0.308/0.308/0.308/0.000 ms 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:14.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:14.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:26:14.821 00:26:14.821 --- 10.0.0.1 ping statistics --- 00:26:14.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.821 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2506472 00:26:14.821 22:57:21 
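[annotation] The nvmf_tcp_init sequence above splits one host into target and initiator: the target-side port (cvl_0_0) is moved into its own network namespace so the two sides get separate IP stacks, each gets a 10.0.0.x/24 address, both are brought up, and a ping in each direction verifies reachability. A dry-run sketch of those steps; `run` only echoes the commands, since the real ones need root (substitute plain execution to apply them):

```shell
NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
run() { printf '%s\n' "$*"; }          # dry-run: print instead of executing

run ip netns add "$NS"
run ip link set "$TGT" netns "$NS"                       # target iface leaves the root ns
run ip addr add 10.0.0.1/24 dev "$INI"                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"  # target side
run ip link set "$INI" up
run ip netns exec "$NS" ip link set "$TGT" up
run ip netns exec "$NS" ip link set lo up
run ping -c 1 10.0.0.2                                   # initiator -> target check
```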
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2506472 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2506472 ']' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.821 22:57:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ba4884e686eb73bdea7077c581e040c9 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.3HS 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ba4884e686eb73bdea7077c581e040c9 0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ba4884e686eb73bdea7077c581e040c9 0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ba4884e686eb73bdea7077c581e040c9 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.3HS 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.3HS 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.3HS 00:26:14.821 22:57:22 
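[annotation] `gen_dhchap_key null 32` above reads half the requested length in random bytes (`xxd -p -c0 -l 16 /dev/urandom`) because each byte renders as two hex digits; the 64-char sha512 controller key below correspondingly reads 32 bytes. The inline `python -` step then wraps the hex material into the DHHC-1 secret string, which is SPDK-internal and omitted here. A sketch of just the key-material step, using `od` instead of `xxd` so it runs without the xxd tool installed:

```shell
gen_hex_key() {  # emit <len> lowercase hex characters of random key material
  local len=$1                          # target hex length, e.g. 32 or 64
  # read len/2 random bytes and render each as two hex digits
  od -An -tx1 -N "$((len / 2))" /dev/urandom | tr -d ' \n'
  echo
}
```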
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=f9c54ef39171b8674539b3d4cd4dca24fa0fb98f4c8d7164acc09250ffd4de10 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Ghj 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key f9c54ef39171b8674539b3d4cd4dca24fa0fb98f4c8d7164acc09250ffd4de10 3 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 f9c54ef39171b8674539b3d4cd4dca24fa0fb98f4c8d7164acc09250ffd4de10 3 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=f9c54ef39171b8674539b3d4cd4dca24fa0fb98f4c8d7164acc09250ffd4de10 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Ghj 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Ghj 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.Ghj 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2bd37fc10ff8ef0852580af7422ce2090fe3faa44631979d 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.hJD 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2bd37fc10ff8ef0852580af7422ce2090fe3faa44631979d 0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2bd37fc10ff8ef0852580af7422ce2090fe3faa44631979d 0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.821 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2bd37fc10ff8ef0852580af7422ce2090fe3faa44631979d 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.hJD 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.hJD 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.hJD 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8ae1e4309a79bb6d27c4800f846b55bdc2a340af255c4dd1 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.PGs 00:26:14.821 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8ae1e4309a79bb6d27c4800f846b55bdc2a340af255c4dd1 2 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 8ae1e4309a79bb6d27c4800f846b55bdc2a340af255c4dd1 2 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8ae1e4309a79bb6d27c4800f846b55bdc2a340af255c4dd1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.PGs 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.PGs 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.PGs 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=580e60460dbe5cb727996c6ce1f75d64 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Qck 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 580e60460dbe5cb727996c6ce1f75d64 1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 580e60460dbe5cb727996c6ce1f75d64 1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=580e60460dbe5cb727996c6ce1f75d64 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Qck 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Qck 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Qck 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=1ed9da6c7e46142911408a4c08d903fd 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.9gU 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ed9da6c7e46142911408a4c08d903fd 1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ed9da6c7e46142911408a4c08d903fd 1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ed9da6c7e46142911408a4c08d903fd 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.9gU 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.9gU 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9gU 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:26:14.822 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2de3e106476408acc6e8232e735d56c49c38228ad51e6ad3 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.7Fb 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2de3e106476408acc6e8232e735d56c49c38228ad51e6ad3 2 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2de3e106476408acc6e8232e735d56c49c38228ad51e6ad3 2 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2de3e106476408acc6e8232e735d56c49c38228ad51e6ad3 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:26:14.822 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.7Fb 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.7Fb 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.7Fb 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0e45fb8174bc884580bea09e2bb088d4 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Hov 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0e45fb8174bc884580bea09e2bb088d4 0 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0e45fb8174bc884580bea09e2bb088d4 0 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0e45fb8174bc884580bea09e2bb088d4 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Hov 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Hov 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Hov 00:26:15.081 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=30ce4d2432c330929c2b2671653103bd93aba2f983c5aea3e0a6e9b68061d797 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.H1X 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 30ce4d2432c330929c2b2671653103bd93aba2f983c5aea3e0a6e9b68061d797 3 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 30ce4d2432c330929c2b2671653103bd93aba2f983c5aea3e0a6e9b68061d797 3 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=30ce4d2432c330929c2b2671653103bd93aba2f983c5aea3e0a6e9b68061d797 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.H1X 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.H1X 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.H1X 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2506472 00:26:15.081 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2506472 ']' 00:26:15.082 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:15.082 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:15.082 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:15.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
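The `gen_dhchap_key`/`format_dhchap_key` steps traced above turn a random hex string from `xxd /dev/urandom` into the interchange form `DHHC-1:<digest id>:<base64 payload>:` via an inline `python -` snippet. A minimal standalone reconstruction is sketched below; the payload layout (ASCII hex secret followed by a 4-byte little-endian CRC-32) is an assumption inferred from the keys visible later in this log (e.g. `DHHC-1:00:MmJkMzdm...`), not copied from `nvmf/common.sh` itself.

```shell
#!/usr/bin/env bash
# Hedged reconstruction of format_dhchap_key from the trace above.
# Assumed layout: DHHC-1:<2-digit digest id>:base64(<hex secret as ASCII> + CRC-32 LE):
format_dhchap_key() {
    local key=$1 digest=$2
    # Mirrors the log's "nvmf/common.sh@733 -- # python -" step.
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                      # the hex secret, kept as ASCII text
crc = zlib.crc32(key).to_bytes(4, "little")     # assumed: CRC-32 of the secret, little-endian
print(f"DHHC-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
EOF
}

# Demo with the 48-hex-char key and digest id 0 (null) generated above.
format_dhchap_key 2bd37fc10ff8ef0852580af7422ce2090fe3faa44631979d 0
```

The digest id matches the `digests` map in the trace (`null`=0, `sha256`=1, `sha384`=2, `sha512`=3), which is why the sha384 key below carries the `DHHC-1:02:` prefix.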
00:26:15.082 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:15.082 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.3HS 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.Ghj ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Ghj 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.hJD 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
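The `host/auth.sh@80` loop traced here registers each generated key file with the target's keyring: `keys[i]` becomes keyring entry `key<i>`, and the paired verification key `ckeys[i]` becomes `ckey<i>` only when it is non-empty (note `ckeys[4]` is empty above). A runnable sketch of that loop, with `rpc_cmd` stubbed out so it works without a live SPDK target listening on `/var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Sketch of the host/auth.sh key-registration loop; rpc_cmd is stubbed
# (the real helper forwards to scripts/rpc.py over the SPDK UNIX socket).
rpc_cmd() { echo "rpc_cmd $*"; }

# Two entries taken from the trace; the real arrays hold five.
keys=(/tmp/spdk.key-null.3HS /tmp/spdk.key-null.hJD)
ckeys=(/tmp/spdk.key-sha512.Ghj "")

for i in "${!keys[@]}"; do
    rpc_cmd keyring_file_add_key "key$i" "${keys[i]}"
    if [[ -n ${ckeys[i]} ]]; then
        # Controller (bidirectional) key is optional per slot.
        rpc_cmd keyring_file_add_key "ckey$i" "${ckeys[i]}"
    fi
done
```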
00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.PGs ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.PGs 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Qck 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9gU ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9gU 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.7Fb 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Hov ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Hov 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.H1X 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:15.341 22:57:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:26:15.341 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:15.342 22:57:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:26:18.628 Waiting for block devices as requested 00:26:18.628 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:26:18.628 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:18.628 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:26:18.887 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:26:18.887 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:26:18.887 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:26:19.146 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:26:19.146 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:26:19.146 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:26:19.146 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:26:20.082 No valid GPT data, bailing 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:26:20.082 00:26:20.082 Discovery Log Number of Records 2, Generation counter 2 00:26:20.082 =====Discovery Log Entry 0====== 00:26:20.082 trtype: tcp 00:26:20.082 adrfam: ipv4 00:26:20.082 subtype: current discovery subsystem 00:26:20.082 treq: not specified, sq flow control disable supported 00:26:20.082 portid: 1 00:26:20.082 trsvcid: 4420 00:26:20.082 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:20.082 traddr: 10.0.0.1 00:26:20.082 eflags: none 00:26:20.082 sectype: none 00:26:20.082 =====Discovery Log Entry 1====== 00:26:20.082 trtype: tcp 00:26:20.082 adrfam: ipv4 00:26:20.082 subtype: nvme subsystem 00:26:20.082 treq: not specified, sq flow control disable supported 00:26:20.082 portid: 1 00:26:20.082 trsvcid: 4420 00:26:20.082 subnqn: nqn.2024-02.io.spdk:cnode0 00:26:20.082 traddr: 10.0.0.1 00:26:20.082 eflags: none 00:26:20.082 sectype: none 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.082 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 nvme0n1 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.340 22:57:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.341 nvme0n1 00:26:20.341 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.341 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.341 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.341 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.341 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.341 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.600 22:57:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.600 
22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.600 nvme0n1 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.600 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:26:20.860 nvme0n1 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.860 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.119 nvme0n1 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.119 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:21.120 22:57:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.120 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.379 nvme0n1 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.379 
22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:21.379 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:21.380 22:57:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:21.380 
22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.380 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.380 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 nvme0n1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.639 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.639 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.639 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.899 nvme0n1 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:21.899 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:21.899 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.158 nvme0n1 00:26:22.158 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:22.158 22:57:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:22.158 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.159 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.417 nvme0n1 00:26:22.417 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.417 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.417 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.417 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:22.417 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.417 22:57:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:22.417 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.418 22:57:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.418 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.682 nvme0n1 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.682 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.683 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.943 nvme0n1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:22.943 
22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.943 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.202 nvme0n1 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.202 22:57:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:23.202 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.461 22:57:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.721 nvme0n1 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.721 22:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:23.721 
22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.721 22:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.721 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 nvme0n1 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 22:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:23.980 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:23.981 
22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:23.981 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.239 nvme0n1 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.239 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.240 22:57:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.240 22:57:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.808 nvme0n1 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:24.808 22:57:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:24.808 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.067 nvme0n1 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.067 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.327 22:57:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.327 22:57:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.587 nvme0n1 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:25.587 22:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:25.587 22:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:25.587 22:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.587 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.157 nvme0n1 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.157 22:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.157 22:57:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.157 22:57:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.417 nvme0n1 00:26:26.417 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.417 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:26.417 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:26.417 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.417 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.417 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:26.697 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.697 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.341 nvme0n1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.341 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.341 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:27.341 22:57:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.341 22:57:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 nvme0n1 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:27.909 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:27.910 22:57:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:27.910 22:57:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.479 nvme0n1 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.479 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 nvme0n1 00:26:29.048 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.048 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.048 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.048 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.048 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.048 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:29.307 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.308 
22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.308 22:57:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.877 nvme0n1 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.877 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 nvme0n1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.137 
22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 nvme0n1 
00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.137 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:30.397 22:57:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.397 
22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.397 22:57:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.397 nvme0n1 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.397 22:57:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.397 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.657 nvme0n1 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.657 22:57:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.657 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 nvme0n1 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.917 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.176 nvme0n1 00:26:31.176 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.176 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.176 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.176 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.176 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.177 
22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.177 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.436 nvme0n1 00:26:31.436 22:57:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 
00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.436 22:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.436 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.695 nvme0n1 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.695 22:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.695 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.696 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.955 nvme0n1 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.955 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.956 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.215 nvme0n1 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.215 22:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.215 22:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:32.215 22:57:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.215 22:57:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 nvme0n1 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.475 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.476 
22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.476 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 nvme0n1 00:26:32.736 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.736 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:32.736 22:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:32.736 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.736 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.736 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:32.996 22:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.996 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.257 nvme0n1 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.257 22:57:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.257 22:57:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.517 nvme0n1 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.517 22:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:33.517 22:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:33.517 
22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.517 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.777 nvme0n1 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.777 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:33.778 22:57:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:33.778 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:34.348 nvme0n1 
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==:
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==:
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==:
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==:
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.348 22:57:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.607 nvme0n1
00:26:34.607 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.607 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:34.607 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:34.607 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.607 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No:
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u:
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No:
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u:
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:34.867 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.127 nvme0n1
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==:
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn:
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==:
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]]
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn:
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.127 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.386 22:57:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.645 nvme0n1
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.645 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=:
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=:
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:35.646 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.215 nvme0n1
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp:
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=:
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp:
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=:
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.215 22:57:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.784 nvme0n1
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==:
00:26:36.784 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==:
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==:
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]]
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==:
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:36.785 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.355 nvme0n1
00:26:37.355 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:37.355 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:37.355 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:37.355 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:37.355 22:57:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No:
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u:
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No:
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u:
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:37.355 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:37.922 nvme0n1
00:26:37.922 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:37.922 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:26:37.922 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:26:37.922 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:37.922 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==:
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn:
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==:
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]]
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn:
00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.182 22:57:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 nvme0n1 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.751 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:39.320 nvme0n1 00:26:39.320 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.320 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.320 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.320 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.320 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.320 22:57:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:39.320 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:39.321 22:57:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.321 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.580 nvme0n1 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.580 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.581 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.841 nvme0n1 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 
00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.841 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.101 nvme0n1 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:26:40.101 22:57:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.101 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.361 nvme0n1 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe2048 4 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 
00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.361 22:57:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.620 nvme0n1 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 
00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.620 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.880 nvme0n1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@49 -- # echo ffdhe3072 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:40.880 22:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:40.880 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.140 nvme0n1 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.140 22:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 
-- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.140 nvme0n1 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.140 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:26:41.399 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.400 22:57:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.400 22:57:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.400 nvme0n1 00:26:41.400 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.400 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.400 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.400 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.400 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.658 22:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.658 nvme0n1 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:41.658 
22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.658 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.917 22:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.917 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.176 nvme0n1 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:42.176 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.177 22:57:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.177 22:57:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.436 nvme0n1 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:42.436 
22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.436 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.437 22:57:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.437 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.696 nvme0n1 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.696 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.955 22:57:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:42.955 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:42.956 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.215 nvme0n1 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.215 22:57:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.215 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.216 22:57:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 nvme0n1 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:43.475 
22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:43.475 22:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.475 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.044 nvme0n1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.044 22:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:44.044 22:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.044 22:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.044 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.303 nvme0n1 00:26:44.303 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.303 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.303 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.303 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.304 22:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:26:44.304 22:57:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.304 22:57:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.304 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.872 nvme0n1 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:44.872 22:57:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.872 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.131 nvme0n1 00:26:45.131 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.131 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.131 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.131 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.131 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.131 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.390 
22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.390 22:57:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.650 nvme0n1 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:45.650 22:57:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YmE0ODg0ZTY4NmViNzNiZGVhNzA3N2M1ODFlMDQwYzk0I2Gp: 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: ]] 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjljNTRlZjM5MTcxYjg2NzQ1MzliM2Q0Y2Q0ZGNhMjRmYTBmYjk4ZjRjOGQ3MTY0YWNjMDkyNTBmZmQ0ZGUxMFKaOuc=: 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.650 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.909 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.478 nvme0n1 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.478 22:57:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:46.478 22:57:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.478 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.046 nvme0n1 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.046 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.047 
22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.047 22:57:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.614 nvme0n1 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.614 22:57:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmRlM2UxMDY0NzY0MDhhY2M2ZTgyMzJlNzM1ZDU2YzQ5YzM4MjI4YWQ1MWU2YWQznZh1aQ==: 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: ]] 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MGU0NWZiODE3NGJjODg0NTgwYmVhMDllMmJiMDg4ZDTBDdSn: 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.614 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.615 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:26:48.550 nvme0n1 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MzBjZTRkMjQzMmMzMzA5MjljMmIyNjcxNjUzMTAzYmQ5M2FiYTJmOTgzYzVhZWEzZTBhNmU5YjY4MDYxZDc5N0wKKyo=: 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:26:48.550 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:48.551 
22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.551 22:57:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.120 nvme0n1 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:49.120 
22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.120 request: 00:26:49.120 { 00:26:49.120 "name": "nvme0", 00:26:49.120 "trtype": "tcp", 00:26:49.120 "traddr": "10.0.0.1", 00:26:49.120 "adrfam": "ipv4", 00:26:49.120 "trsvcid": "4420", 00:26:49.120 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:49.120 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:49.120 "prchk_reftag": false, 00:26:49.120 "prchk_guard": false, 00:26:49.120 "hdgst": false, 00:26:49.120 "ddgst": false, 00:26:49.120 "allow_unrecognized_csi": false, 00:26:49.120 "method": "bdev_nvme_attach_controller", 00:26:49.120 "req_id": 1 00:26:49.120 } 00:26:49.120 Got JSON-RPC error response 00:26:49.120 response: 00:26:49.120 { 00:26:49.120 "code": -5, 00:26:49.120 "message": "Input/output 
error" 00:26:49.120 } 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.120 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.121 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.121 request: 00:26:49.121 { 00:26:49.121 "name": "nvme0", 00:26:49.121 "trtype": "tcp", 00:26:49.121 "traddr": "10.0.0.1", 
00:26:49.121 "adrfam": "ipv4", 00:26:49.121 "trsvcid": "4420", 00:26:49.121 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:49.121 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:49.121 "prchk_reftag": false, 00:26:49.121 "prchk_guard": false, 00:26:49.121 "hdgst": false, 00:26:49.121 "ddgst": false, 00:26:49.121 "dhchap_key": "key2", 00:26:49.121 "allow_unrecognized_csi": false, 00:26:49.121 "method": "bdev_nvme_attach_controller", 00:26:49.121 "req_id": 1 00:26:49.121 } 00:26:49.121 Got JSON-RPC error response 00:26:49.121 response: 00:26:49.121 { 00:26:49.121 "code": -5, 00:26:49.121 "message": "Input/output error" 00:26:49.121 } 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.381 22:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:49.381 22:57:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.381 request: 00:26:49.381 { 00:26:49.381 "name": "nvme0", 00:26:49.381 "trtype": "tcp", 00:26:49.381 "traddr": "10.0.0.1", 00:26:49.381 "adrfam": "ipv4", 00:26:49.381 "trsvcid": "4420", 00:26:49.381 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:26:49.381 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:26:49.381 "prchk_reftag": false, 00:26:49.381 "prchk_guard": false, 00:26:49.381 "hdgst": false, 00:26:49.381 "ddgst": false, 00:26:49.381 "dhchap_key": "key1", 00:26:49.381 "dhchap_ctrlr_key": "ckey2", 00:26:49.381 "allow_unrecognized_csi": false, 00:26:49.381 "method": "bdev_nvme_attach_controller", 00:26:49.381 "req_id": 1 00:26:49.381 } 00:26:49.381 Got JSON-RPC error response 00:26:49.381 response: 00:26:49.381 { 00:26:49.381 "code": -5, 00:26:49.381 "message": "Input/output error" 00:26:49.381 } 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.381 22:57:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.381 nvme0n1 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.381 22:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:49.381 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.382 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.640 22:57:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.640 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.640 request: 00:26:49.640 { 00:26:49.640 "name": "nvme0", 00:26:49.640 "dhchap_key": "key1", 00:26:49.640 "dhchap_ctrlr_key": "ckey2", 00:26:49.640 "method": "bdev_nvme_set_keys", 00:26:49.640 "req_id": 1 00:26:49.640 } 00:26:49.640 Got JSON-RPC error response 00:26:49.640 response: 00:26:49.640 { 00:26:49.640 "code": -13, 00:26:49.641 "message": "Permission denied" 00:26:49.641 } 00:26:49.641 
22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:49.641 22:57:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:26:51.018 22:57:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmJkMzdmYzEwZmY4ZWYwODUyNTgwYWY3NDIyY2UyMDkwZmUzZmFhNDQ2MzE5NzlkAPUbXg==: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: ]] 00:26:51.955 22:57:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OGFlMWU0MzA5YTc5YmI2ZDI3YzQ4MDBmODQ2YjU1YmRjMmEzNDBhZjI1NWM0ZGQxhL1gpA==: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.955 nvme0n1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.955 22:57:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NTgwZTYwNDYwZGJlNWNiNzI3OTk2YzZjZTFmNzVkNjTx03No: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: ]] 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWVkOWRhNmM3ZTQ2MTQyOTExNDA4YTRjMDhkOTAzZmQQ6u7u: 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:51.955 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:51.956 
22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.956 request: 00:26:51.956 { 00:26:51.956 "name": "nvme0", 00:26:51.956 "dhchap_key": "key2", 00:26:51.956 "dhchap_ctrlr_key": "ckey1", 00:26:51.956 "method": "bdev_nvme_set_keys", 00:26:51.956 "req_id": 1 00:26:51.956 } 00:26:51.956 Got JSON-RPC error response 00:26:51.956 response: 00:26:51.956 { 00:26:51.956 "code": -13, 00:26:51.956 "message": "Permission denied" 00:26:51.956 } 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.956 22:57:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.956 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.215 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:26:52.215 22:57:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:53.151 rmmod nvme_tcp 00:26:53.151 rmmod nvme_fabrics 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # 
modprobe -v -r nvme-fabrics 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2506472 ']' 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2506472 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2506472 ']' 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2506472 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2506472 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2506472' 00:26:53.151 killing process with pid 2506472 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2506472 00:26:53.151 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2506472 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 
00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:53.411 22:58:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:55.318 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:55.318 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:26:55.578 22:58:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:26:58.872 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:58.872 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:59.132 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:59.392 22:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.3HS /tmp/spdk.key-null.hJD /tmp/spdk.key-sha256.Qck /tmp/spdk.key-sha384.7Fb /tmp/spdk.key-sha512.H1X 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvme-auth.log 00:26:59.392 22:58:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:27:01.930 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:27:01.930 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:01.930 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:27:02.190 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:27:02.190 00:27:02.190 real 0m54.126s 00:27:02.190 user 0m48.993s 00:27:02.190 sys 0m12.553s 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.190 ************************************ 00:27:02.190 END TEST nvmf_auth_host 00:27:02.190 ************************************ 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:27:02.190 22:58:09 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.190 ************************************ 00:27:02.190 START TEST nvmf_digest 00:27:02.190 ************************************ 00:27:02.190 22:58:09 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:02.451 * Looking for test storage... 00:27:02.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.451 --rc genhtml_branch_coverage=1 00:27:02.451 --rc genhtml_function_coverage=1 00:27:02.451 --rc genhtml_legend=1 00:27:02.451 --rc geninfo_all_blocks=1 00:27:02.451 --rc geninfo_unexecuted_blocks=1 00:27:02.451 00:27:02.451 ' 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.451 --rc genhtml_branch_coverage=1 00:27:02.451 --rc genhtml_function_coverage=1 00:27:02.451 --rc genhtml_legend=1 00:27:02.451 --rc geninfo_all_blocks=1 00:27:02.451 --rc geninfo_unexecuted_blocks=1 00:27:02.451 00:27:02.451 ' 00:27:02.451 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:02.451 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.451 --rc genhtml_branch_coverage=1 00:27:02.451 --rc genhtml_function_coverage=1 00:27:02.451 --rc genhtml_legend=1 00:27:02.451 --rc geninfo_all_blocks=1 00:27:02.451 --rc geninfo_unexecuted_blocks=1 00:27:02.451 00:27:02.451 ' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:02.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:02.452 --rc genhtml_branch_coverage=1 00:27:02.452 --rc genhtml_function_coverage=1 00:27:02.452 --rc genhtml_legend=1 00:27:02.452 --rc geninfo_all_blocks=1 00:27:02.452 --rc geninfo_unexecuted_blocks=1 00:27:02.452 00:27:02.452 ' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:02.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:02.452 22:58:10 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:27:02.452 22:58:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.030 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:09.030 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:27:09.030 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:09.031 22:58:15 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:09.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:09.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:09.031 Found net devices under 0000:86:00.0: cvl_0_0 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:09.031 Found net devices under 0000:86:00.1: cvl_0_1 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:09.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:09.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:27:09.031 00:27:09.031 --- 10.0.0.2 ping statistics --- 00:27:09.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.031 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:27:09.031 22:58:15 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:09.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:09.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:27:09.032 00:27:09.032 --- 10.0.0.1 ping statistics --- 00:27:09.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:09.032 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 ************************************ 00:27:09.032 START TEST nvmf_digest_clean 00:27:09.032 ************************************ 00:27:09.032 
22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2520304 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2520304 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2520304 ']' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.032 22:58:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:09.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 [2024-12-10 22:58:16.123901] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:09.032 [2024-12-10 22:58:16.123942] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:09.032 [2024-12-10 22:58:16.204358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.032 [2024-12-10 22:58:16.243992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:09.032 [2024-12-10 22:58:16.244029] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:09.032 [2024-12-10 22:58:16.244036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:09.032 [2024-12-10 22:58:16.244042] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:09.032 [2024-12-10 22:58:16.244047] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:09.032 [2024-12-10 22:58:16.244598] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 null0 00:27:09.032 [2024-12-10 22:58:16.392700] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.032 [2024-12-10 22:58:16.416895] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2520323 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2520323 /var/tmp/bperf.sock 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2520323 ']' 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:09.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:09.032 [2024-12-10 22:58:16.470592] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:09.032 [2024-12-10 22:58:16.470632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520323 ] 00:27:09.032 [2024-12-10 22:58:16.548386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.032 [2024-12-10 22:58:16.589683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:09.032 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:09.294 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.294 22:58:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:09.862 nvme0n1 00:27:09.862 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:09.862 22:58:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:09.862 Running I/O for 2 seconds... 00:27:11.857 24041.00 IOPS, 93.91 MiB/s [2024-12-10T21:58:19.586Z] 24400.50 IOPS, 95.31 MiB/s 00:27:11.857 Latency(us) 00:27:11.857 [2024-12-10T21:58:19.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:11.857 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:11.857 nvme0n1 : 2.04 23940.97 93.52 0.00 0.00 5235.99 2550.21 45818.21 00:27:11.857 [2024-12-10T21:58:19.586Z] =================================================================================================================== 00:27:11.857 [2024-12-10T21:58:19.586Z] Total : 23940.97 93.52 0.00 0.00 5235.99 2550.21 45818.21 00:27:11.857 { 00:27:11.857 "results": [ 00:27:11.857 { 00:27:11.857 "job": "nvme0n1", 00:27:11.857 "core_mask": "0x2", 00:27:11.857 "workload": "randread", 00:27:11.857 "status": "finished", 00:27:11.857 "queue_depth": 128, 00:27:11.857 "io_size": 4096, 00:27:11.857 "runtime": 2.043735, 00:27:11.857 "iops": 23940.970820580947, 00:27:11.857 "mibps": 93.51941726789433, 00:27:11.857 "io_failed": 0, 00:27:11.857 "io_timeout": 0, 00:27:11.857 "avg_latency_us": 5235.985968524045, 00:27:11.857 "min_latency_us": 2550.2052173913044, 00:27:11.857 "max_latency_us": 45818.212173913045 00:27:11.857 } 00:27:11.857 ], 00:27:11.857 "core_count": 1 00:27:11.857 } 00:27:11.857 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:11.857 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- 
# get_accel_stats 00:27:11.857 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:11.857 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:11.857 | select(.opcode=="crc32c") 00:27:11.857 | "\(.module_name) \(.executed)"' 00:27:11.857 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2520323 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2520323 ']' 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2520323 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520323 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520323' 00:27:12.140 killing process with pid 2520323 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2520323 00:27:12.140 Received shutdown signal, test time was about 2.000000 seconds 00:27:12.140 00:27:12.140 Latency(us) 00:27:12.140 [2024-12-10T21:58:19.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:12.140 [2024-12-10T21:58:19.869Z] =================================================================================================================== 00:27:12.140 [2024-12-10T21:58:19.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:12.140 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2520323 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2520812 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2520812 /var/tmp/bperf.sock 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2520812 ']' 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:12.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:12.400 22:58:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:12.400 [2024-12-10 22:58:19.932736] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:12.400 [2024-12-10 22:58:19.932786] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2520812 ] 00:27:12.400 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:12.400 Zero copy mechanism will not be used. 
00:27:12.400 [2024-12-10 22:58:20.008793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:12.400 [2024-12-10 22:58:20.054848] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.400 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:12.400 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:12.400 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:12.400 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:12.400 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:12.659 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:12.659 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:13.227 nvme0n1 00:27:13.227 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:13.227 22:58:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:13.227 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:13.227 Zero copy mechanism will not be used. 00:27:13.227 Running I/O for 2 seconds... 
00:27:15.100 5522.00 IOPS, 690.25 MiB/s [2024-12-10T21:58:22.829Z] 5856.00 IOPS, 732.00 MiB/s 00:27:15.100 Latency(us) 00:27:15.100 [2024-12-10T21:58:22.829Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.100 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:15.100 nvme0n1 : 2.00 5855.11 731.89 0.00 0.00 2729.87 641.11 7351.43 00:27:15.100 [2024-12-10T21:58:22.829Z] =================================================================================================================== 00:27:15.100 [2024-12-10T21:58:22.829Z] Total : 5855.11 731.89 0.00 0.00 2729.87 641.11 7351.43 00:27:15.100 { 00:27:15.100 "results": [ 00:27:15.100 { 00:27:15.100 "job": "nvme0n1", 00:27:15.100 "core_mask": "0x2", 00:27:15.100 "workload": "randread", 00:27:15.100 "status": "finished", 00:27:15.100 "queue_depth": 16, 00:27:15.100 "io_size": 131072, 00:27:15.100 "runtime": 2.003035, 00:27:15.100 "iops": 5855.114863195102, 00:27:15.100 "mibps": 731.8893578993877, 00:27:15.100 "io_failed": 0, 00:27:15.100 "io_timeout": 0, 00:27:15.100 "avg_latency_us": 2729.870630523756, 00:27:15.100 "min_latency_us": 641.1130434782609, 00:27:15.100 "max_latency_us": 7351.429565217391 00:27:15.100 } 00:27:15.100 ], 00:27:15.100 "core_count": 1 00:27:15.100 } 00:27:15.359 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:15.359 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:15.359 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:15.359 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:15.359 | select(.opcode=="crc32c") 00:27:15.359 | "\(.module_name) \(.executed)"' 00:27:15.359 22:58:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2520812 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2520812 ']' 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2520812 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:15.359 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520812 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520812' 00:27:15.619 killing process with pid 2520812 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2520812 00:27:15.619 Received shutdown signal, test time was about 2.000000 seconds 
00:27:15.619 00:27:15.619 Latency(us) 00:27:15.619 [2024-12-10T21:58:23.348Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:15.619 [2024-12-10T21:58:23.348Z] =================================================================================================================== 00:27:15.619 [2024-12-10T21:58:23.348Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2520812 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2521491 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2521491 /var/tmp/bperf.sock 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2521491 ']' 00:27:15.619 22:58:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:15.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:15.619 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:15.619 [2024-12-10 22:58:23.292032] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:15.619 [2024-12-10 22:58:23.292081] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521491 ] 00:27:15.878 [2024-12-10 22:58:23.371564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.878 [2024-12-10 22:58:23.408466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.878 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.878 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:15.878 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:15.878 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:15.878 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:16.137 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.137 22:58:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:16.396 nvme0n1 00:27:16.396 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:16.396 22:58:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:16.396 Running I/O for 2 seconds... 
00:27:18.709 26694.00 IOPS, 104.27 MiB/s [2024-12-10T21:58:26.438Z] 26763.00 IOPS, 104.54 MiB/s 00:27:18.709 Latency(us) 00:27:18.709 [2024-12-10T21:58:26.438Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.709 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:18.709 nvme0n1 : 2.00 26764.88 104.55 0.00 0.00 4774.17 3618.73 8776.13 00:27:18.709 [2024-12-10T21:58:26.438Z] =================================================================================================================== 00:27:18.709 [2024-12-10T21:58:26.439Z] Total : 26764.88 104.55 0.00 0.00 4774.17 3618.73 8776.13 00:27:18.710 { 00:27:18.710 "results": [ 00:27:18.710 { 00:27:18.710 "job": "nvme0n1", 00:27:18.710 "core_mask": "0x2", 00:27:18.710 "workload": "randwrite", 00:27:18.710 "status": "finished", 00:27:18.710 "queue_depth": 128, 00:27:18.710 "io_size": 4096, 00:27:18.710 "runtime": 2.004642, 00:27:18.710 "iops": 26764.878716499006, 00:27:18.710 "mibps": 104.55030748632424, 00:27:18.710 "io_failed": 0, 00:27:18.710 "io_timeout": 0, 00:27:18.710 "avg_latency_us": 4774.168035382912, 00:27:18.710 "min_latency_us": 3618.7269565217393, 00:27:18.710 "max_latency_us": 8776.125217391304 00:27:18.710 } 00:27:18.710 ], 00:27:18.710 "core_count": 1 00:27:18.710 } 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:18.710 | select(.opcode=="crc32c") 00:27:18.710 | "\(.module_name) \(.executed)"' 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2521491 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2521491 ']' 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2521491 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2521491 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2521491' 00:27:18.710 killing process with pid 2521491 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2521491 00:27:18.710 Received shutdown signal, test time was about 2.000000 seconds 
00:27:18.710 00:27:18.710 Latency(us) 00:27:18.710 [2024-12-10T21:58:26.439Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:18.710 [2024-12-10T21:58:26.439Z] =================================================================================================================== 00:27:18.710 [2024-12-10T21:58:26.439Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:18.710 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2521491 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2521965 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2521965 /var/tmp/bperf.sock 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2521965 ']' 00:27:18.969 22:58:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:18.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.969 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:18.969 [2024-12-10 22:58:26.608683] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:18.969 [2024-12-10 22:58:26.608731] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2521965 ] 00:27:18.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:18.969 Zero copy mechanism will not be used. 
00:27:18.969 [2024-12-10 22:58:26.684537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.228 [2024-12-10 22:58:26.726294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:19.228 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:19.228 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:27:19.228 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:19.228 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:19.228 22:58:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:19.488 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.488 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:19.747 nvme0n1 00:27:19.747 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:19.747 22:58:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:19.747 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:19.747 Zero copy mechanism will not be used. 00:27:19.747 Running I/O for 2 seconds... 
00:27:22.063 5971.00 IOPS, 746.38 MiB/s [2024-12-10T21:58:29.792Z] 5977.00 IOPS, 747.12 MiB/s 00:27:22.063 Latency(us) 00:27:22.063 [2024-12-10T21:58:29.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.063 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:22.063 nvme0n1 : 2.00 5976.40 747.05 0.00 0.00 2672.69 1367.71 4473.54 00:27:22.063 [2024-12-10T21:58:29.792Z] =================================================================================================================== 00:27:22.063 [2024-12-10T21:58:29.792Z] Total : 5976.40 747.05 0.00 0.00 2672.69 1367.71 4473.54 00:27:22.063 { 00:27:22.063 "results": [ 00:27:22.063 { 00:27:22.063 "job": "nvme0n1", 00:27:22.063 "core_mask": "0x2", 00:27:22.063 "workload": "randwrite", 00:27:22.063 "status": "finished", 00:27:22.063 "queue_depth": 16, 00:27:22.063 "io_size": 131072, 00:27:22.063 "runtime": 2.003546, 00:27:22.063 "iops": 5976.403835998774, 00:27:22.063 "mibps": 747.0504794998468, 00:27:22.063 "io_failed": 0, 00:27:22.063 "io_timeout": 0, 00:27:22.063 "avg_latency_us": 2672.6875186091606, 00:27:22.063 "min_latency_us": 1367.7078260869566, 00:27:22.063 "max_latency_us": 4473.544347826087 00:27:22.063 } 00:27:22.063 ], 00:27:22.063 "core_count": 1 00:27:22.063 } 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:27:22.063 | select(.opcode=="crc32c") 00:27:22.063 | "\(.module_name) \(.executed)"' 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2521965 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2521965 ']' 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2521965 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2521965 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2521965' 00:27:22.063 killing process with pid 2521965 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2521965 00:27:22.063 Received shutdown signal, test time was about 2.000000 seconds 
00:27:22.063 00:27:22.063 Latency(us) 00:27:22.063 [2024-12-10T21:58:29.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.063 [2024-12-10T21:58:29.792Z] =================================================================================================================== 00:27:22.063 [2024-12-10T21:58:29.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:22.063 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2521965 00:27:22.321 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2520304 00:27:22.321 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2520304 ']' 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2520304 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2520304 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2520304' 00:27:22.322 killing process with pid 2520304 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2520304 00:27:22.322 22:58:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2520304 00:27:22.581 00:27:22.581 
real 0m14.016s 00:27:22.581 user 0m26.755s 00:27:22.581 sys 0m4.653s 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:22.581 ************************************ 00:27:22.581 END TEST nvmf_digest_clean 00:27:22.581 ************************************ 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:22.581 ************************************ 00:27:22.581 START TEST nvmf_digest_error 00:27:22.581 ************************************ 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2522580 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2522580 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2522580 ']' 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.581 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.581 [2024-12-10 22:58:30.209680] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:22.581 [2024-12-10 22:58:30.209720] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.581 [2024-12-10 22:58:30.289541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.840 [2024-12-10 22:58:30.330569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.840 [2024-12-10 22:58:30.330603] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:22.840 [2024-12-10 22:58:30.330610] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.840 [2024-12-10 22:58:30.330616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.840 [2024-12-10 22:58:30.330621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:22.840 [2024-12-10 22:58:30.331207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.840 [2024-12-10 22:58:30.399663] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.840 22:58:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:22.840 null0 00:27:22.840 [2024-12-10 22:58:30.495256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.840 [2024-12-10 22:58:30.519461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2522707 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2522707 /var/tmp/bperf.sock 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2522707 
']' 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:22.840 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:22.841 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:22.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:22.841 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:22.841 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:23.100 [2024-12-10 22:58:30.570275] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:23.100 [2024-12-10 22:58:30.570317] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2522707 ] 00:27:23.100 [2024-12-10 22:58:30.644668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.100 [2024-12-10 22:58:30.685603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.100 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.100 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:23.100 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:23.100 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:23.359 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:23.359 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.359 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:23.359 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.359 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.359 22:58:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:23.927 nvme0n1 00:27:23.927 22:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:23.927 22:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.927 22:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:23.927 22:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.927 22:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:23.927 22:58:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:23.927 Running I/O for 2 seconds... 00:27:23.927 [2024-12-10 22:58:31.511684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.511716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.511727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.523938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.523962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.523971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.534140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.534168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.534177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.542931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.542957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11689 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.542966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.552905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.552926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.552935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.564664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.564685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.564693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.574008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.574028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.574036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.585327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.585352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.585361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.596245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.596266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.596274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.608954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.608976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.608984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.620527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.620547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.620555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.629104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 
22:58:31.629124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.629133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.639900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.639921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.927 [2024-12-10 22:58:31.639929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:23.927 [2024-12-10 22:58:31.648454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:23.927 [2024-12-10 22:58:31.648475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:23.928 [2024-12-10 22:58:31.648483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.659694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.659714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.659722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.668993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.669013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.669021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.677234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.677255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.677264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.688692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.688712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.688720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.698387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.698407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23812 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.698415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.709088] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.709108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.709116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.718589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.718608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:9070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.718620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.727453] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.727473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.727480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.737636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.737656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.737665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.745963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.745983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:22837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.745991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.756895] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.756915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.756924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.767520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.187 [2024-12-10 22:58:31.767540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:22075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.187 [2024-12-10 22:58:31.767548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.187 [2024-12-10 22:58:31.777466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.777486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.777494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.790134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.790154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.790170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.800221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.800241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.800249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.810956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.810975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.810983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.819544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.819563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.819571] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.832441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.832461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.832469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.844850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.844869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.844877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.857256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.857276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:24054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.857284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.865538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.865556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:9705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.188 [2024-12-10 22:58:31.865564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.877484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.877504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.877512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.889847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.889867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.889875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.188 [2024-12-10 22:58:31.901975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.188 [2024-12-10 22:58:31.901995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.188 [2024-12-10 22:58:31.902022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.447 [2024-12-10 22:58:31.913145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.447 [2024-12-10 22:58:31.913172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 
lba:5876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.447 [2024-12-10 22:58:31.913181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.447 [2024-12-10 22:58:31.922838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.447 [2024-12-10 22:58:31.922858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.447 [2024-12-10 22:58:31.922866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.447 [2024-12-10 22:58:31.931247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.447 [2024-12-10 22:58:31.931267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.931275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:31.944008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:31.944028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.944036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:31.955374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:31.955394] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.955402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:31.965545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:31.965564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7036 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.965572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:31.975650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:31.975670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.975678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:31.984049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:31.984069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.984077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:31.994636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 
00:27:24.448 [2024-12-10 22:58:31.994662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:31.994670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.003175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.003196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.003203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.015723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.015743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.015751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.026852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.026872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.026881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.036979] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.036998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.037006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.046611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.046631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.046639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.058648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.058668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2182 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.058676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.070886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.070907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.070916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.079414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.079433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.079441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.091636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.091656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.091664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.103872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.103893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.103901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.115716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.115736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.115744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.128830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.128850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.128858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.139179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.139199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.139206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.147307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.147326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:20694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.147333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.157444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.157464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:6142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.157472] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.448 [2024-12-10 22:58:32.168292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.448 [2024-12-10 22:58:32.168314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.448 [2024-12-10 22:58:32.168321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.180833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.180853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.180865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.189828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.189849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:3961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.189858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.202360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.202381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24605 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.202390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.214201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.214221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.214229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.225975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.225994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.226002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.234410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.234431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:7030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.234438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.245220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.245239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:106 nsid:1 lba:18991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.245247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.256210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.256230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.256238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.267859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.267879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.267887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.276391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.276415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2813 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.276423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.288691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.288711] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.288719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.301698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.301718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.301727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.313713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.313732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.313740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.322574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.322595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.322603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.335097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.335117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.335125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.346667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.346687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.346695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.359464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.359484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:23304 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.359492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.368370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.368389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.368397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.380862] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.380887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.380895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.393502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.393521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:9638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.393529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.406311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.406330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.406338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.418568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.418588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.418597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:24.708 [2024-12-10 22:58:32.427106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.708 [2024-12-10 22:58:32.427125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.708 [2024-12-10 22:58:32.427132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.438406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.438428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:16471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.438436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.451425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.451447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:9203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.451455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.462336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.462355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.462363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.471021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.471041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.471052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.483586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.483606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:3083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.483614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 23404.00 IOPS, 91.42 MiB/s [2024-12-10T21:58:32.697Z] [2024-12-10 22:58:32.496363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.496381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:22780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.496389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.508932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.508952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:24.968 [2024-12-10 22:58:32.508960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.519655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.519675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:3745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.519684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.531993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.532013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.532021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.540789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.540810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:15691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.540818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.552436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.552456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 
nsid:1 lba:24963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.552464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.563978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.563998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.564006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.572539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.572558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.572567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.584319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.584339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.584347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.595781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.595801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:10753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.595809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.604114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.604134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.604142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.615236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.615256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:10475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.615264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.623698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.623718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.623726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.633329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.633349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18773 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.633357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.642593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.642612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.642620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.652284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.652303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.652315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.660984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.661003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.661011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.670767] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.968 [2024-12-10 22:58:32.670786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11041 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.968 [2024-12-10 22:58:32.670793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.968 [2024-12-10 22:58:32.680923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.969 [2024-12-10 22:58:32.680943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.969 [2024-12-10 22:58:32.680950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:24.969 [2024-12-10 22:58:32.690804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:24.969 [2024-12-10 22:58:32.690823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:24.969 [2024-12-10 22:58:32.690832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.699260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.699280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.699288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.710877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.710896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.710903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.721740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.721761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.721769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.729708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.729728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.729736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.740737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.740764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.740772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.749809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.749829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.749837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.758696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.758716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.758724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.770008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.770028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.770036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.781379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.781399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:19999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.781407] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.789615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.789634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.789642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.801542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.801562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:1064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.801571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.811741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.811762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.811771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.820172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.820193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1452 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.820201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.832254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.832276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:15222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.832284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.843690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.843711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.843719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.854225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.854247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.854255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.864351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.864372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:24168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.864380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.228 [2024-12-10 22:58:32.873900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.228 [2024-12-10 22:58:32.873924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.228 [2024-12-10 22:58:32.873934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.883506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.883528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.883537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.893472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.893493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.893501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.902749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.902770] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.902779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.912185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.912206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.912217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.924303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.924325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.924333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.934912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.934932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.934940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.229 [2024-12-10 22:58:32.947286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x997300) 00:27:25.229 [2024-12-10 22:58:32.947306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.229 [2024-12-10 22:58:32.947315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.488 [2024-12-10 22:58:32.959395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.488 [2024-12-10 22:58:32.959416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.488 [2024-12-10 22:58:32.959424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.488 [2024-12-10 22:58:32.969994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.488 [2024-12-10 22:58:32.970014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.488 [2024-12-10 22:58:32.970022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.488 [2024-12-10 22:58:32.978709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.488 [2024-12-10 22:58:32.978730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:14406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.488 [2024-12-10 22:58:32.978738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.488 [2024-12-10 22:58:32.989172] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.488 [2024-12-10 22:58:32.989193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.488 [2024-12-10 22:58:32.989201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.488 [2024-12-10 22:58:32.999839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.488 [2024-12-10 22:58:32.999859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.488 [2024-12-10 22:58:32.999867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.488 [2024-12-10 22:58:33.008571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.488 [2024-12-10 22:58:33.008594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.008603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.018507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.018528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.018536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.027844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:4099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.027874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.037124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.037145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.037153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.046193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.046214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.046222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.056265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.056286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.056294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.065860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.065880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.065887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.075991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.076011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.076019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.084823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.084843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.084851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.093414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.093435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:14404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.093443] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.103361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.103381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.103389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.113903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.113924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.113933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.125251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.125272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:64 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.125280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.136368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.136388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:12093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:25.489 [2024-12-10 22:58:33.136396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.145062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.145081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.145089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.155583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.155603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.155611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.165820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.165841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:24881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.165850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.179560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.179582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 
lba:25451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.179595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.187742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.187763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.187772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.198618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.198639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.198649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.489 [2024-12-10 22:58:33.208181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.489 [2024-12-10 22:58:33.208201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:1386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.489 [2024-12-10 22:58:33.208209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.218123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.218144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:9689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.218152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.227597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.227619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.227627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.236636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.236657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:2106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.236665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.247009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.247029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:5916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.247038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.255700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 
00:27:25.749 [2024-12-10 22:58:33.255721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:4223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.255729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.265527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.265547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:17878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.265555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.274699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.274720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.274728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.283960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.283981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:10210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.283989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.293461] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.293481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:22598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.293490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.302086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.302106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:25017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.302114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.313522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.313544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.313552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.326091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.326113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.326121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.336579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.336599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:20232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.336608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.344809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.344829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:9774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.344841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.356015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.356036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:11047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.356044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.366347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.366367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.366375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.377110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.377130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.377138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.385299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.385320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.385329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.396186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.396206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.396215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.404519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.404539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.404547] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.414757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.414778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.414786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.427614] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.427634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.427642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.440500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.440523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.440532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.448454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.448475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:16999 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.448483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.460670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.460691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.460699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:25.749 [2024-12-10 22:58:33.472604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:25.749 [2024-12-10 22:58:33.472625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:25.749 [2024-12-10 22:58:33.472634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.008 [2024-12-10 22:58:33.482138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:26.008 [2024-12-10 22:58:33.482164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.008 [2024-12-10 22:58:33.482173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.008 [2024-12-10 22:58:33.494132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x997300) 00:27:26.008 [2024-12-10 22:58:33.494153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:123 nsid:1 lba:16159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:26.008 [2024-12-10 22:58:33.494167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:26.008 24261.50 IOPS, 94.77 MiB/s 00:27:26.008 Latency(us) 00:27:26.008 [2024-12-10T21:58:33.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.008 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:26.008 nvme0n1 : 2.01 24282.08 94.85 0.00 0.00 5264.96 2607.19 18464.06 00:27:26.008 [2024-12-10T21:58:33.737Z] =================================================================================================================== 00:27:26.008 [2024-12-10T21:58:33.737Z] Total : 24282.08 94.85 0.00 0.00 5264.96 2607.19 18464.06 00:27:26.008 { 00:27:26.008 "results": [ 00:27:26.008 { 00:27:26.008 "job": "nvme0n1", 00:27:26.008 "core_mask": "0x2", 00:27:26.008 "workload": "randread", 00:27:26.008 "status": "finished", 00:27:26.008 "queue_depth": 128, 00:27:26.008 "io_size": 4096, 00:27:26.008 "runtime": 2.005388, 00:27:26.008 "iops": 24282.08406552747, 00:27:26.008 "mibps": 94.85189088096668, 00:27:26.008 "io_failed": 0, 00:27:26.008 "io_timeout": 0, 00:27:26.008 "avg_latency_us": 5264.964757492287, 00:27:26.008 "min_latency_us": 2607.1930434782607, 00:27:26.008 "max_latency_us": 18464.055652173913 00:27:26.008 } 00:27:26.008 ], 00:27:26.008 "core_count": 1 00:27:26.008 } 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 
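The `get_transient_errcount` step above pipes `rpc.py ... bdev_get_iostat -b nvme0n1` through a jq filter to pull out the transient transport error counter. A minimal standalone sketch of the same extraction, run against a hand-written JSON sample — the sample's shape is inferred from the jq filter visible in this log, and the value 190 is copied from the `(( 190 > 0 ))` check below, purely for illustration:

```shell
# Hand-written stand-in for bdev_get_iostat output; field names are taken
# from the jq filter in the digest.sh helper seen in this log.
cat > /tmp/iostat_sample.json <<'EOF'
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 190
          }
        }
      }
    }
  ]
}
EOF

# Same pipeline of filters the test script applies, -r for raw output.
errcount=$(jq -r '.bdevs[0]
  | .driver_specific
  | .nvme_error
  | .status_code
  | .command_transient_transport_error' /tmp/iostat_sample.json)

echo "transient errors: $errcount"
[ "$errcount" -gt 0 ] && echo "digest errors were counted"
```

The test passes when this counter is nonzero, i.e. when the injected CRC32 corruption actually surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions.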
00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:26.008 | .driver_specific 00:27:26.008 | .nvme_error 00:27:26.008 | .status_code 00:27:26.008 | .command_transient_transport_error' 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 190 > 0 )) 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2522707 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2522707 ']' 00:27:26.008 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2522707 00:27:26.009 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:26.009 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:26.009 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522707 00:27:26.267 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:26.267 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:26.267 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522707' 00:27:26.267 killing process with pid 2522707 00:27:26.267 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2522707 00:27:26.267 Received shutdown signal, test time was about 2.000000 seconds 00:27:26.268 00:27:26.268 Latency(us) 00:27:26.268 [2024-12-10T21:58:33.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:26.268 [2024-12-10T21:58:33.997Z] 
=================================================================================================================== 00:27:26.268 [2024-12-10T21:58:33.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2522707 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2523179 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2523179 /var/tmp/bperf.sock 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2523179 ']' 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
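The `waitforlisten 2523179 /var/tmp/bperf.sock` call above blocks until the freshly launched bdevperf is serving RPCs on its UNIX domain socket. A rough sketch of that kind of poll loop, assuming only a socket-path check and a retry cap like the `max_retries=100` visible in this log (the real helper in autotest_common.sh does more, e.g. checking the pid is alive):

```shell
# Poll until a UNIX domain socket appears at the given path, or give up
# after max_retries attempts (0.1 s apart). Sketch only, not the real helper.
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $sock" >&2
            return 1
        fi
        sleep 0.1
    done
    echo "$sock is ready"
}
```

Polling the socket path (rather than sleeping a fixed interval) is what lets the test proceed as soon as bdevperf is actually listening.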
00:27:26.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:26.268 22:58:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:26.268 [2024-12-10 22:58:33.977946] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:26.268 [2024-12-10 22:58:33.977992] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523179 ] 00:27:26.268 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:26.268 Zero copy mechanism will not be used. 00:27:26.526 [2024-12-10 22:58:34.051755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:26.526 [2024-12-10 22:58:34.088572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.526 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:26.526 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:26.526 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:26.526 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:26.785 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:26.785 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.785 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:26.785 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.785 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:26.785 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:27.044 nvme0n1 00:27:27.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:27.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:27.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:27.044 22:58:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:27.303 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:27.303 Zero copy mechanism will not be used. 00:27:27.303 Running I/O for 2 seconds... 
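The per-job summary and JSON results block earlier in this log report both IOPS and MiB/s, which are related by MiB/s = IOPS x io_size / 2^20. A one-line check of that arithmetic against the values copied from the first run's results block (iops 24282.08406552747, io_size 4096):

```shell
# Recompute MiB/s from the iops and io_size fields of the results JSON
# shown earlier in this log; should match the reported mibps value.
iops=24282.08406552747
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
```

This reproduces the reported 94.85 MiB/s, confirming the summary line is derived from the raw counters rather than measured separately.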
00:27:27.303 [2024-12-10 22:58:34.814321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.303 [2024-12-10 22:58:34.814356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.814366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.818749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.818775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.818784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.823126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.823150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.823165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.828018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.828044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.828053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.833221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.833245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.833254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.837996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.838020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.838029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.843845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.843869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.843878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.849740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.849763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.849773] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.853590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.853613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.853622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.861172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.861196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.861206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.867663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.867687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.867700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.874647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.874671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:27.304 [2024-12-10 22:58:34.874680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.880659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.880682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.880691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.888172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.888195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.888204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.895381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.895406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.895415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.902939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.902964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.902973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.910879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.910903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.910912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.918263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.918287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.918297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.924449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.924472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.924481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.929736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.929759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.929768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.934974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.934997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.935005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.940217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.940244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.940254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.945920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.945942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.945950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.951827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.951849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.951858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.957079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.304 [2024-12-10 22:58:34.957100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.304 [2024-12-10 22:58:34.957109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.304 [2024-12-10 22:58:34.962383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.962405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.962413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.967664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.967685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.967693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.972964] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.972986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.972997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.978173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.978196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.978204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.983322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.983345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.983353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.988601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.988624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.988633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.993808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.993830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.993838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:34.999012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:34.999033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:34.999042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:35.004246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:35.004266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:35.004275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:35.009492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:35.009514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:35.009522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:35.014667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:35.014688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:35.014697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:35.019853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:35.019881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:35.019890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.305 [2024-12-10 22:58:35.025087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.305 [2024-12-10 22:58:35.025109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.305 [2024-12-10 22:58:35.025118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.030310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.030333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 
22:58:35.030342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.035520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.035541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.035549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.040482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.040504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.040513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.045665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.045686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.045695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.050832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.050853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.050862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.055950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.055971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.055979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.061076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.061098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.061106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.066269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.066291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.066299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.071498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.071521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.071530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.076764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.076786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.076797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.081973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.081997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.082005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.087218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.087241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.087250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.092402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 
00:27:27.567 [2024-12-10 22:58:35.092424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.092433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.097623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.097646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.097655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.102835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.102857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.102865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.108009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.108031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.108044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.113164] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.113188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.113196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.118309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.118333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.123519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.123541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.123550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.128703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.128725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.128733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.133900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.133921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.133930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.139116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.139137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.139146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.144373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.144395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.144403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.149605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.149627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.149636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.154771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.154792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.154800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.159978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.160000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.160009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.165131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.165152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.165166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.170330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.170351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.170360] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.175496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.567 [2024-12-10 22:58:35.175519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.567 [2024-12-10 22:58:35.175527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.567 [2024-12-10 22:58:35.180639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.180661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.180670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.185806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.185829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.185838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.190978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.191000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.191008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.196172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.196193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.196204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.201331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.201353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.201361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.206518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.206540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.206548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.211657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.211680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.211688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.216828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.216850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.216858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.222037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.222059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.222067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.227259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.227284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.227293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.232445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.232467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.232476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.237682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.237704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.237713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.242883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.242909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.242917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.248062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.248085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.248093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.253225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.253247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.253255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.258437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.258459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.258467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.263682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.263704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.263712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.268836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.268857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.268866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.274062] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.274084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.274092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.279239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.279261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.279269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.284411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.284432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.284441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.568 [2024-12-10 22:58:35.289647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.568 [2024-12-10 22:58:35.289669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.568 [2024-12-10 22:58:35.289677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.294982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.295004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.828 [2024-12-10 22:58:35.295013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.300223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.300245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.828 [2024-12-10 22:58:35.300253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.305458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.305479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.828 [2024-12-10 22:58:35.305488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.310669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.310690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.828 [2024-12-10 22:58:35.310699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.315864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.315886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.828 [2024-12-10 22:58:35.315894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.321063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.321085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.828 [2024-12-10 22:58:35.321093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.828 [2024-12-10 22:58:35.326303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.828 [2024-12-10 22:58:35.326324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.829 [2024-12-10 22:58:35.326332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.829 [2024-12-10 22:58:35.331582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.829 [2024-12-10 22:58:35.331605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.829 [2024-12-10 22:58:35.331617] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:27.829 [2024-12-10 22:58:35.336840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.829 [2024-12-10 22:58:35.336863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.829 [2024-12-10 22:58:35.336872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:27.829 [2024-12-10 22:58:35.342047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.829 [2024-12-10 22:58:35.342069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.829 [2024-12-10 22:58:35.342078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:27.829 [2024-12-10 22:58:35.347326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.829 [2024-12-10 22:58:35.347349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:27.829 [2024-12-10 22:58:35.347358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:27.829 [2024-12-10 22:58:35.352563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:27.829 [2024-12-10 22:58:35.352586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.352594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.357827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.357850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.357859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.362697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.362719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.362728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.367910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.367932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.367941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.373206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.373229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.373238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.378501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.378527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.378536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.383699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.383722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.383730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.388945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.388968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.388976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.394137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.394166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.394175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.399375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.399397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.399405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.404638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.404660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.404668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.409772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.409794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.409803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.412637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.412659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.412667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.418561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.418583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.418592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.424540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.424563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.424571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.430525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.430546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.430555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.435746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.435768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.435776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.440981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.441003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.441011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.446284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.446306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.446314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.451529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.451551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.451559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.456768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.829 [2024-12-10 22:58:35.456790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.829 [2024-12-10 22:58:35.456798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.829 [2024-12-10 22:58:35.462053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.462074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.462082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.467120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.467142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.467154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.472602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.472624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.472633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.478384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.478406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.478414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.483629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.483650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.483658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.488860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.488882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.488890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.494072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.494093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.494102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.499274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.499295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.499304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.504424] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.504447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.504456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.509624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.509646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.509654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.514816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.514839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.514847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.519983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.520004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.520012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.525192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.525214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.525222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.530421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.530443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.530452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.535657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.535678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.535687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.540893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.540914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.540923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.545983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.546003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.546012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:27.830 [2024-12-10 22:58:35.551073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:27.830 [2024-12-10 22:58:35.551096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:27.830 [2024-12-10 22:58:35.551104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.556364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.556386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.556398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.561598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.561621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.561629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.566793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.566814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.566822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.571976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.571997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.572005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.577291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.577313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.577322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.582553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.582574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.582583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.587833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.587855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.587863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.593018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.593040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.593048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.598255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.598276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.598284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.603484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.603509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.603517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.608681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.608703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.608711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.613867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.613889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.613898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.619114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.619136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.619145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.624274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.624295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.624304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.629450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.629473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.629481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.634649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.634671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.634679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.639862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.639885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.639893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.645063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.645085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.645093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.650290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.650312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.650322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.656257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.656280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.656289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.661921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.661943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.661951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.667257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.667278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.667288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.672401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.672424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.672432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.677668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.677691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.677699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.090 [2024-12-10 22:58:35.683008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.090 [2024-12-10 22:58:35.683030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.090 [2024-12-10 22:58:35.683038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.688275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.688297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.688305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.693585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.693607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.693622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.698719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.698742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.698750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.704044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.704065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.704074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.709433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.709455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.709464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.714829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.714851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.714860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.720097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.720119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.720127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.725423] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.725446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.725455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.730682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.730704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.730713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.735816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.735839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.735848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.741093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.741119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.741128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.746534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.746557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.746565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.752090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.752112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.752121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.757487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.757508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.757517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.762920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.762943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.762951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.768375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.768398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.768407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.773800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.773823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.773832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:28.091 [2024-12-10 22:58:35.779240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80)
00:27:28.091 [2024-12-10 22:58:35.779261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:28.091 [2024-12-10 22:58:35.779269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022
p:0 m:0 dnr:0 00:27:28.091 [2024-12-10 22:58:35.784806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.091 [2024-12-10 22:58:35.784828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.091 [2024-12-10 22:58:35.784837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.091 [2024-12-10 22:58:35.790215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.091 [2024-12-10 22:58:35.790237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.091 [2024-12-10 22:58:35.790246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.091 [2024-12-10 22:58:35.795855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.091 [2024-12-10 22:58:35.795882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.091 [2024-12-10 22:58:35.795891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.091 [2024-12-10 22:58:35.801274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.091 [2024-12-10 22:58:35.801296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.091 [2024-12-10 22:58:35.801304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.091 [2024-12-10 22:58:35.806763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.091 [2024-12-10 22:58:35.806785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.091 [2024-12-10 22:58:35.806794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.091 [2024-12-10 22:58:35.812000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.091 [2024-12-10 22:58:35.812023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.091 [2024-12-10 22:58:35.812032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.350 5797.00 IOPS, 724.62 MiB/s [2024-12-10T21:58:36.079Z] [2024-12-10 22:58:35.818025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.818047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.818056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.823392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.823415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:28.350 [2024-12-10 22:58:35.823423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.828818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.828841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.828850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.834259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.834284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.834293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.839726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.839749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.839757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.845319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.845340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.845349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.850621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.850643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.850652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.856123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.856146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.856155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.861588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.861610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.861618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.866943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.866967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.866975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.872365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.872386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.872395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.877779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.877805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.877815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.883170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.883193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.883202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.888626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 
00:27:28.350 [2024-12-10 22:58:35.888649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.888657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.893978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.894001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.894010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.899569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.899591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.899600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.905027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.905049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.905057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.350 [2024-12-10 22:58:35.910413] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.350 [2024-12-10 22:58:35.910436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.350 [2024-12-10 22:58:35.910446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.915902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.915926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.915934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.921429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.921451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.921460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.926981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.927003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.927015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.932231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.932254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.932263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.937572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.937594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.937602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.942924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.942946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.942955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.948061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.948083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.948092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.953437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.953459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.953468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.958913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.958935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.958944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.964473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.964495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.964503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.970002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.970024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.970033] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.975299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.975324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.975333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.980617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.980639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.980647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.986038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.986059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.986068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.991375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.991397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.991406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:35.996785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:35.996806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:35.996815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.002182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.002205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.002214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.007584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.007606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.007615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.013454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.013476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:7 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.013485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.018763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.018784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.018793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.024145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.024175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.024184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.029355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.029377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.029389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.034672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.034695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.034704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.040021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.040043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.040051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.045435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.045456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.045464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.050616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.050638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.050647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.055942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 
00:27:28.351 [2024-12-10 22:58:36.055964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.055973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.061237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.061258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.061267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.066707] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.066729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.066741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.351 [2024-12-10 22:58:36.071886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.351 [2024-12-10 22:58:36.071908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.351 [2024-12-10 22:58:36.071916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.077383] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.077406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.077414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.082953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.082975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.082984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.088236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.088259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.088267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.093827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.093849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.093858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.099322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.099345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.099354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.104705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.104728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.104737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.110072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.110096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.110104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.115380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.115408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.115417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.120701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.120724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.120732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.125864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.125886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.125895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.131459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.131482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.131491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.136784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.136806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.136815] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.142190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.142212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.142220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.147714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.147735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.147744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.153006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.153029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.153037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.158730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.158753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.158761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.164231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.164253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.164262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.169665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.169688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.169696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.175127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.175149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.175164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.180492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.180514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:7 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.612 [2024-12-10 22:58:36.180522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.612 [2024-12-10 22:58:36.185995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.612 [2024-12-10 22:58:36.186018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.186027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.191605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.191627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.191635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.197153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.197181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.197189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.202677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.202699] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.202707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.208208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.208230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.208241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.213530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.213551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.213559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.219093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.219115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.219124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.224417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.224440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.224449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.229776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.229799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.229807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.235138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.235166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.235175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.240477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.240499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.240507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.245843] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.245865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.245874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.251248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.251270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.251278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.256468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.256492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.256500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.261730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.261752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.261761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.267051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.267073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.267081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.272411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.272434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.272443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.277894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.277917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.277925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.283263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.283285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.283293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.288730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.288752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.288760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.294051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.294073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.294082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.300415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.300437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.300449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.306131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.306154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.306168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.309828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.309849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.309858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.314297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.314318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.314327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.319606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.319628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.319636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.325035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.325057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.325065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.330365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.330387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.330396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.613 [2024-12-10 22:58:36.335881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.613 [2024-12-10 22:58:36.335903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.613 [2024-12-10 22:58:36.335912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.341274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.341296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.341305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.347440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.347466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.347476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.353004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.353026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.353035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.358431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.358453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.358461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.363910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.363932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.363941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.369323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.369345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.369353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.374721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.374742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.374751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.380078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.380100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.380108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.385435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.385456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.385465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.390234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 
00:27:28.873 [2024-12-10 22:58:36.390256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.390265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.395389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.395412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.395421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.400868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.400889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.400898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.406297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.406319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.406327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.411732] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.411754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.411763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.417223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.417245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.417253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.422640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.422663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.422672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.428232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.428255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.428264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.433701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.433723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.433731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.439095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.873 [2024-12-10 22:58:36.439117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.873 [2024-12-10 22:58:36.439129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.873 [2024-12-10 22:58:36.444516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.444538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.444547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.450077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.450100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.450109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.455813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.455836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.455845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.461021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.461045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.461054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.466184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.466205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.466214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.471514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.471535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.471544] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.476945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.476967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.476976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.482569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.482591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.482600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.488144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.488176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.488184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.493683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.493705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.493713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.499205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.499228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.499237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.504972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.504994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.505002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.510403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.510425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.510433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.515908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.515929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.515938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.521250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.521272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.521280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.526837] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.526859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.526868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.532655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.532678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.532687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.538082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.538105] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.538113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.543507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.543529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.543537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.549124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.549147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.549155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.554722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.554744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.554753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.559965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.559986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.559995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.565347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.565369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.565378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.570896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.570918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.570927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.576355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.576377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.576385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.582080] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.582103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.582114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.587669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.587692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.587700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.593265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.593288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.593297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:28.874 [2024-12-10 22:58:36.596334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:28.874 [2024-12-10 22:58:36.596356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:28.874 [2024-12-10 22:58:36.596365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.602054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.602077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.602086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.608101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.608124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.608133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.614141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.614169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.614178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.619636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.619657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.619666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.624748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.624769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.624778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.630200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.630223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.630231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.635669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.635691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.635699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.641015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.641038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 
22:58:36.641047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.646481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.134 [2024-12-10 22:58:36.646503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.134 [2024-12-10 22:58:36.646513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.134 [2024-12-10 22:58:36.652266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.652289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.652298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.657706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.657727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.657735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.663331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.663353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.663362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.669088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.669110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.669118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.674740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.674763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.674776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.680408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.680430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.680438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.686135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.686162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.686172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.691759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.691781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.691789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.697234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.697256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.697264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.702692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.702714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.702722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.708165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 
00:27:29.135 [2024-12-10 22:58:36.708187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.708195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.713727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.713748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.713757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.719014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.719037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.719045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.724524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.724551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.724559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.730073] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.730096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.730104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.735701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.735723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.735732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.741298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.741320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.741329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.747465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.747488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.747496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.752943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.752965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.752973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.758461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.758483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.758491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.764296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.764319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.764328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.769695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.769718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.769727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.775252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.775274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.775283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.780829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.780851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.780859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.786474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.786497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.792236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.792259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.792268] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.797944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.797967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.135 [2024-12-10 22:58:36.797975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:29.135 [2024-12-10 22:58:36.803438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.135 [2024-12-10 22:58:36.803460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.136 [2024-12-10 22:58:36.803469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:29.136 [2024-12-10 22:58:36.809110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.136 [2024-12-10 22:58:36.809132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:29.136 [2024-12-10 22:58:36.809140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:29.136 [2024-12-10 22:58:36.812716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1fc3f80) 00:27:29.136 [2024-12-10 22:58:36.812736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:29.136 [2024-12-10 22:58:36.812745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:29.136 5758.00 IOPS, 719.75 MiB/s 00:27:29.136 Latency(us) 00:27:29.136 [2024-12-10T21:58:36.865Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.136 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:29.136 nvme0n1 : 2.00 5757.15 719.64 0.00 0.00 2776.14 651.80 8035.28 00:27:29.136 [2024-12-10T21:58:36.865Z] =================================================================================================================== 00:27:29.136 [2024-12-10T21:58:36.865Z] Total : 5757.15 719.64 0.00 0.00 2776.14 651.80 8035.28 00:27:29.136 { 00:27:29.136 "results": [ 00:27:29.136 { 00:27:29.136 "job": "nvme0n1", 00:27:29.136 "core_mask": "0x2", 00:27:29.136 "workload": "randread", 00:27:29.136 "status": "finished", 00:27:29.136 "queue_depth": 16, 00:27:29.136 "io_size": 131072, 00:27:29.136 "runtime": 2.003074, 00:27:29.136 "iops": 5757.1512585156615, 00:27:29.136 "mibps": 719.6439073144577, 00:27:29.136 "io_failed": 0, 00:27:29.136 "io_timeout": 0, 00:27:29.136 "avg_latency_us": 2776.140148697764, 00:27:29.136 "min_latency_us": 651.7982608695652, 00:27:29.136 "max_latency_us": 8035.28347826087 00:27:29.136 } 00:27:29.136 ], 00:27:29.136 "core_count": 1 00:27:29.136 } 00:27:29.136 22:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:29.136 22:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:29.136 22:58:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:29.136 | .driver_specific 00:27:29.136 | .nvme_error 00:27:29.136 | .status_code 00:27:29.136 | .command_transient_transport_error' 00:27:29.136 22:58:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 372 > 0 )) 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2523179 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2523179 ']' 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2523179 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2523179 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2523179' 00:27:29.395 killing process with pid 2523179 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2523179 00:27:29.395 Received shutdown signal, test time was about 2.000000 seconds 00:27:29.395 00:27:29.395 Latency(us) 00:27:29.395 [2024-12-10T21:58:37.124Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:29.395 [2024-12-10T21:58:37.124Z] 
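The trace above shows `get_transient_errcount` querying `bdev_get_iostat` over the bperf UNIX socket and filtering the JSON with jq down to `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`. A minimal sketch of that extraction, using a hard-coded sample payload in place of the live `rpc.py` call (the JSON shape mirrors the jq path in the log; python3 stands in for jq so the sketch runs anywhere):

```shell
#!/bin/sh
# Hypothetical stand-in for:
#   scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
# The 372 matches the count the log later tests with (( 372 > 0 )).
sample_iostat='{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 372
          }
        }
      }
    }
  ]
}'

# Walk the same path the jq filter uses, one key per pipe stage.
errcount=$(printf '%s' "$sample_iostat" | python3 -c '
import json, sys
d = json.load(sys.stdin)
print(d["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"]
       ["command_transient_transport_error"])
')

# host/digest.sh@71 then asserts the count is positive, proving the
# injected digest corruption actually produced transient errors.
[ "$errcount" -gt 0 ] && echo "transient errors seen: $errcount"
```

The test passes only when this counter is nonzero, i.e. when the corrupted CRC32C digests were observed as TRANSIENT TRANSPORT ERROR completions.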
=================================================================================================================== 00:27:29.395 [2024-12-10T21:58:37.124Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:29.395 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2523179 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2523720 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2523720 /var/tmp/bperf.sock 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2523720 ']' 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:29.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.654 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:29.654 [2024-12-10 22:58:37.298956] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:29.654 [2024-12-10 22:58:37.299004] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2523720 ] 00:27:29.654 [2024-12-10 22:58:37.375896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.913 [2024-12-10 22:58:37.415512] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.913 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.913 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:29.913 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:29.913 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:30.172 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:30.172 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.172 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:27:30.172 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.172 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.172 22:58:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:30.430 nvme0n1 00:27:30.688 22:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:30.688 22:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.688 22:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:30.688 22:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.688 22:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:30.689 22:58:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:30.689 Running I/O for 2 seconds... 
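The setup sequence in the lines above is: disable CRC32C error injection, attach the NVMe-oF TCP controller with data digest (`--ddgst`) enabled, re-enable injection in `corrupt` mode with interval 256, then drive I/O with `perform_tests`. A hedged sketch of that RPC sequence as command strings (socket path, address, and NQN copied from the log; composing rather than executing them, since they need a live SPDK target):

```shell
#!/bin/sh
# Assumption: run from the SPDK checkout root; socket path matches the log.
RPC="scripts/rpc.py -s /var/tmp/bperf.sock"

# bdevperf was started in wait mode (-z) so these RPCs configure it first:
#   ./build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
#       -w randwrite -o 4096 -t 2 -q 128 -z

# 1. Keep crc32c clean while the controller attaches.
step1="$RPC accel_error_inject_error -o crc32c -t disable"

# 2. Attach with data digest enabled, so every PDU payload gets a CRC32C.
step2="$RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0"

# 3. Corrupt every 256th crc32c computation; each bad digest surfaces as a
#    TRANSIENT TRANSPORT ERROR (00/22) completion in the log that follows.
step3="$RPC accel_error_inject_error -o crc32c -t corrupt -i 256"

printf '%s\n' "$step1" "$step2" "$step3"
```

With injection active, `bdevperf.py perform_tests` runs the 2-second randwrite workload whose digest-error completions fill the rest of this log.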
00:27:30.689 [2024-12-10 22:58:38.278606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.278769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.278796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.288110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.288270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.288290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.297725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.297876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.297894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.307351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.307502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.307521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.316896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.317045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.317063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.326495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.326644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.326662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.336024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.336174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.336193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.345560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.345709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.345726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.355138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.355293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.355315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.364667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.364814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.364832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.374186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.374335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.374353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.383732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.383878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.383896] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.393227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.393375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.393392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.402805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.402955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.402972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.689 [2024-12-10 22:58:38.412373] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.689 [2024-12-10 22:58:38.412525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.689 [2024-12-10 22:58:38.412543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.422152] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.422308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7496 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:30.948 [2024-12-10 22:58:38.422326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.431706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.431855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.431873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.441225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.441377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.441395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.450716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.450864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:16892 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.450882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.460307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.460456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:7432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.460473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.469776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.469922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.469940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.479330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.479477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.479494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.488824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.488974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.488992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.498347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.498496] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.498515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.948 [2024-12-10 22:58:38.507882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.948 [2024-12-10 22:58:38.508031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.948 [2024-12-10 22:58:38.508049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.517389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.517536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:15916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.517554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.526886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.527034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.527051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.536442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 
00:27:30.949 [2024-12-10 22:58:38.536593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.536612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.546173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.546321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.546339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.555726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.555875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.555892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.565317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.565466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.565484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.574818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.574968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.574986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.584406] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.584555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:8107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.584574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.593972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.594125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.594144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.603799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.603952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.603976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.613650] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.613802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.613821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.623270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.623418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:8036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.623435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.632942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.633092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:11916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.633110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.642560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.642708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.642727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:27:30.949 [2024-12-10 22:58:38.652079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.652237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.652255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.661637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.661787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.661804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:30.949 [2024-12-10 22:58:38.671232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:30.949 [2024-12-10 22:58:38.671386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:30.949 [2024-12-10 22:58:38.671404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.681021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.681173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.681191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.690600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.690755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:25352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.690773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.700128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.700286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.700303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.709681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.709829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.709847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.719409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.719559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.719576] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.728918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.729068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.729086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.738479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.738628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.738646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.747987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.748136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.748153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.757510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.757659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.757676] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.767087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.767240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.767258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.776613] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.776761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.776779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.786176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.786325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.786343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.795716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.795866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:9897 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:31.209 [2024-12-10 22:58:38.795884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.805474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.805622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.805639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.815145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.815310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.815328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.824677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.824823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.824841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.834205] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.834355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:23458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.834373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.843763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.843912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.843930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.209 [2024-12-10 22:58:38.853330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.209 [2024-12-10 22:58:38.853481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.209 [2024-12-10 22:58:38.853502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.862856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.863004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:9304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.863022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.872426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.872577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:4371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.872594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.882029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.882185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.882207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.891898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.892052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:20602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.892071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.901896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.902047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.902066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.911485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 
00:27:31.210 [2024-12-10 22:58:38.911635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.911654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.921083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.921241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.921260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.210 [2024-12-10 22:58:38.930640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.210 [2024-12-10 22:58:38.930792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.210 [2024-12-10 22:58:38.930810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.940519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.940673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:25105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.940691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.950113] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.950273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.950291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.959651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.959799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.959816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.969231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.969382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.969400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.978797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.978944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.978961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.988305] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.988455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.988473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:38.997884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.469 [2024-12-10 22:58:38.998032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.469 [2024-12-10 22:58:38.998049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.469 [2024-12-10 22:58:39.007390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.007539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.007556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.016980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.017128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.017145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 
dnr:0 00:27:31.470 [2024-12-10 22:58:39.026523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.026669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.026686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.036020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.036172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.036190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.045632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.045780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.045797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.055459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.055611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.055629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.065044] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.065197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:5667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.065215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.074619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.074766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.074784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.084125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.084280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.084297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.093648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.093797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:8377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.093814] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.103283] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.103433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.103455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.112786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.112932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.112951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.122340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.122487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:6083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.122504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.131877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.132026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.132044] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.141386] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.141533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.141550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.150997] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.151147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.151169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.160493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.160641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:192 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.160658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.170008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.170156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:3073 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:31.470 [2024-12-10 22:58:39.170178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.179588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.179735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:14102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.179753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.470 [2024-12-10 22:58:39.189070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.470 [2024-12-10 22:58:39.189240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.470 [2024-12-10 22:58:39.189258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.729 [2024-12-10 22:58:39.198979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.729 [2024-12-10 22:58:39.199132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:17919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.729 [2024-12-10 22:58:39.199149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.208603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.208752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 
nsid:1 lba:11895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.208769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.218154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.218310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.218328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.227852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.228000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:12344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.228017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.237350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.237499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.237516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.246880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.247027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:11045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.247045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.256531] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.256678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:15722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.256696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.266103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.267175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:23557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.267195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 26479.00 IOPS, 103.43 MiB/s [2024-12-10T21:58:39.459Z] [2024-12-10 22:58:39.275655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.275803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7159 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.275822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.285189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.285339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.285356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.294691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.294839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.294859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.304255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.304407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.304425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.314048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.314207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.314225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.323739] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.323888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.323907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.333440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.333590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.333608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.343007] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.343154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.343176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.352562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.352712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.352733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 
m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.362099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.362252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.362270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.371641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.371789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:7441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.371807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.381213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.381360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.381377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.390704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.390851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.390867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.400227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.400377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.400395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.409793] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.409940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.409958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.419313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.419461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.419478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.428843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.428992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.730 [2024-12-10 22:58:39.429010] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.730 [2024-12-10 22:58:39.438452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.730 [2024-12-10 22:58:39.438600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.731 [2024-12-10 22:58:39.438619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.731 [2024-12-10 22:58:39.447964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.731 [2024-12-10 22:58:39.448111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.731 [2024-12-10 22:58:39.448129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.457755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.457903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.457921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.467440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.467589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:31.990 [2024-12-10 22:58:39.467606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.476960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.477109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.477126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.486529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.486676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.486694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.496033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.496183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.496200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.505584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.505732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:20283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.505750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.515124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.515285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2478 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.515303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.524630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.524776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.524794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.534228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.534376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.534394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.543744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.543891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.543909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.553346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.553493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.553510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.563041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.563197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:11630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.563231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.572828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.572975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.572992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.582432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 
00:27:31.990 [2024-12-10 22:58:39.582578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.582596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.591957] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.592103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.592121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.601467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.601615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.601636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.611039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.611193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.611212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.620596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.620743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:10538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.990 [2024-12-10 22:58:39.620761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.990 [2024-12-10 22:58:39.630103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.990 [2024-12-10 22:58:39.630257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.630276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.639771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.639921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.639939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.649306] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.649454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.649472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.658809] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.658955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.658973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.668439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.668587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.668605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.678083] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.678240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.678258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.687672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.687821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.687839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 
m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.697178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.697329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.697347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:31.991 [2024-12-10 22:58:39.706744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:31.991 [2024-12-10 22:58:39.706891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:12991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:31.991 [2024-12-10 22:58:39.706910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.716542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.716693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.716711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.726275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.726423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.726441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.735973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.736123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.736141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.745681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.745829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.745846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.755334] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.755485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.755503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.764984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.765132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.765153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.774570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.774718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.774736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.784085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.784240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.784258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.793658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.793807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.250 [2024-12-10 22:58:39.793824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.250 [2024-12-10 22:58:39.803173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.250 [2024-12-10 22:58:39.803323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:32.251 [2024-12-10 22:58:39.803341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.812743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.812892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.812910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.822329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.822478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.822495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.832087] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.832245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.832264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.841835] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.841985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:11799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.842002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.851490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.851644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:14939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.851661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.860995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.861142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.861164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.870623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.870771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.870789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.880214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.880364] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.880382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.889734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.889882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:13234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.889900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.899474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.899625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.899642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.908992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.909141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.909166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.918548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 
00:27:32.251 [2024-12-10 22:58:39.918697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.918715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.928075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.928229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.928246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.937636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.937785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:9247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.937803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.947208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.947356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.947373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.956703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.956851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.956869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.966272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.251 [2024-12-10 22:58:39.966422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:18205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.251 [2024-12-10 22:58:39.966440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.251 [2024-12-10 22:58:39.975967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:39.976119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:39.976137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:39.985701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:39.985849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:39.985866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:39.995256] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:39.995403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:39.995421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.004933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.005089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.005114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.014850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.015004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.015028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.024726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.024874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.024894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 
m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.034450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.034604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.034624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.044839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.044994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.045013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.054714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.054868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:22942 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.054887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.064571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.064726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.064744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.074453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.074612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.074646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.085277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.085434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.085451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.095066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.095228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.095247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.104982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.105139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19786 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.105162] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.114811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.114963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:8360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.114983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.124667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.124820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.124838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.134525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.134678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.134697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.144332] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.144484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:32.511 [2024-12-10 22:58:40.144502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.154180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.154331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.154349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.164010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.164169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.164188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.173829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.173982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.174000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.183689] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.183836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:18891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.183854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.193551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.193704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.193723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.203371] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.203525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:7106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.203543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.213241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.213394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.511 [2024-12-10 22:58:40.213413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.511 [2024-12-10 22:58:40.223058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.511 [2024-12-10 22:58:40.223219] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.512 [2024-12-10 22:58:40.223237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.512 [2024-12-10 22:58:40.233115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.512 [2024-12-10 22:58:40.233276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.512 [2024-12-10 22:58:40.233295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.770 [2024-12-10 22:58:40.242980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.770 [2024-12-10 22:58:40.243127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.770 [2024-12-10 22:58:40.243145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.770 [2024-12-10 22:58:40.252796] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.770 [2024-12-10 22:58:40.252946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.770 [2024-12-10 22:58:40.252964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.770 [2024-12-10 22:58:40.262656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 
00:27:32.770 [2024-12-10 22:58:40.262809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.770 [2024-12-10 22:58:40.262828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.770 26447.50 IOPS, 103.31 MiB/s [2024-12-10T21:58:40.499Z] [2024-12-10 22:58:40.272431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1589e10) with pdu=0x200016efda78 00:27:32.770 [2024-12-10 22:58:40.272586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.770 [2024-12-10 22:58:40.272610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.770 00:27:32.770 Latency(us) 00:27:32.770 [2024-12-10T21:58:40.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.770 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:27:32.770 nvme0n1 : 2.01 26448.90 103.32 0.00 0.00 4830.89 3575.99 11853.47 00:27:32.770 [2024-12-10T21:58:40.500Z] =================================================================================================================== 00:27:32.771 [2024-12-10T21:58:40.500Z] Total : 26448.90 103.32 0.00 0.00 4830.89 3575.99 11853.47 00:27:32.771 { 00:27:32.771 "results": [ 00:27:32.771 { 00:27:32.771 "job": "nvme0n1", 00:27:32.771 "core_mask": "0x2", 00:27:32.771 "workload": "randwrite", 00:27:32.771 "status": "finished", 00:27:32.771 "queue_depth": 128, 00:27:32.771 "io_size": 4096, 00:27:32.771 "runtime": 2.006246, 00:27:32.771 "iops": 26448.90008503444, 00:27:32.771 "mibps": 103.31601595716577, 00:27:32.771 "io_failed": 0, 00:27:32.771 "io_timeout": 0, 00:27:32.771 "avg_latency_us": 4830.89129957909, 00:27:32.771 
"min_latency_us": 3575.986086956522, 00:27:32.771 "max_latency_us": 11853.467826086957 00:27:32.771 } 00:27:32.771 ], 00:27:32.771 "core_count": 1 00:27:32.771 } 00:27:32.771 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:32.771 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:32.771 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:32.771 | .driver_specific 00:27:32.771 | .nvme_error 00:27:32.771 | .status_code 00:27:32.771 | .command_transient_transport_error' 00:27:32.771 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 208 > 0 )) 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2523720 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2523720 ']' 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2523720 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2523720 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2523720' 00:27:33.030 killing process with pid 2523720 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2523720 00:27:33.030 Received shutdown signal, test time was about 2.000000 seconds 00:27:33.030 00:27:33.030 Latency(us) 00:27:33.030 [2024-12-10T21:58:40.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:33.030 [2024-12-10T21:58:40.759Z] =================================================================================================================== 00:27:33.030 [2024-12-10T21:58:40.759Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2523720 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2524341 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2524341 /var/tmp/bperf.sock 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:33.030 
22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2524341 ']' 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:33.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:33.030 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.288 [2024-12-10 22:58:40.774991] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:33.289 [2024-12-10 22:58:40.775042] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2524341 ] 00:27:33.289 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:33.289 Zero copy mechanism will not be used. 
00:27:33.289 [2024-12-10 22:58:40.850435] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:33.289 [2024-12-10 22:58:40.887232] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:33.289 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:33.289 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:33.289 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:33.289 22:58:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:33.547 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:33.547 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:33.547 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:33.547 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:33.547 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:33.547 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:34.115 nvme0n1 00:27:34.115 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:34.115 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.115 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:34.115 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.115 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:34.115 22:58:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:34.115 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:34.115 Zero copy mechanism will not be used. 00:27:34.115 Running I/O for 2 seconds... 00:27:34.115 [2024-12-10 22:58:41.675343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.115 [2024-12-10 22:58:41.675419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.115 [2024-12-10 22:58:41.675450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.115 [2024-12-10 22:58:41.681886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.115 [2024-12-10 22:58:41.681958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.681983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 
[2024-12-10 22:58:41.687001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.687095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.687118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.693521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.693686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.693707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.700100] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.700200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.700221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.706806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.706946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.706966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.713364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.713521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.713541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.721067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.721199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.721219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.728716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.728829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.728849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.734445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.734503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.734523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.739281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.739341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.739361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.743987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.744055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.744075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.748691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.748746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.748765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.753454] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.753521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.753540] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.758798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.758854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.758872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.764295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.764374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.764393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.769691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.769777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.769796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.774523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.774609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:34.116 [2024-12-10 22:58:41.774628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.780200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.780263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.780282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.785447] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.785525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.785545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.790839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.790912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.790931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.795907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.795962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.795980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.801166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.801231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.801250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.807326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.807380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.807398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.812450] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.812526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.812549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.817500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.817623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.116 [2024-12-10 22:58:41.817641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.116 [2024-12-10 22:58:41.822945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.116 [2024-12-10 22:58:41.823076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.117 [2024-12-10 22:58:41.823095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.117 [2024-12-10 22:58:41.828713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.117 [2024-12-10 22:58:41.828769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.117 [2024-12-10 22:58:41.828788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.117 [2024-12-10 22:58:41.834537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.117 [2024-12-10 22:58:41.834591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.117 [2024-12-10 22:58:41.834611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.117 [2024-12-10 22:58:41.839597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 
00:27:34.117 [2024-12-10 22:58:41.839723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.117 [2024-12-10 22:58:41.839743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.844374] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.844446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.844466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.849020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.849072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.849091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.853609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.853665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.853684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.858210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.858275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.858295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.862711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.862771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.862790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.867217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.867272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.867291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.871739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.871793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.871813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.876383] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.876440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.876458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.881092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.881225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.881245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.886524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.886660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.886683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.891859] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.891937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.891958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:34.378 [2024-12-10 22:58:41.896648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.896709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.896728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.901530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.901639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.901660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.906229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.906307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.906326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.910924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.911029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.911048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.915662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.915728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.915747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.920242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.920304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.920323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.924661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.924719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.924738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.929142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.929211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.929231] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.933744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.933806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.933826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.938251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.938321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.378 [2024-12-10 22:58:41.938344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.378 [2024-12-10 22:58:41.942744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.378 [2024-12-10 22:58:41.942805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.942825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.947261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.947322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:34.379 [2024-12-10 22:58:41.947341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.951769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.951829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.951848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.956263] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.956324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.956344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.960755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.960813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.960833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.965279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.965334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.965353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.969799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.969856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.969875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.974240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.974293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.974312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.978680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.978738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.978757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.983069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.983130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.983150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.987468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.987527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.987546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.991849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.991920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.991939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:41.996247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:41.996314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:41.996333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.000638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 
00:27:34.379 [2024-12-10 22:58:42.000710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.000728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.005107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.005174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.005193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.009482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.009545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.009563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.013956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.014022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.014041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.018377] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.018436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.018454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.023349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.023400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.023419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.027938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.027994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.028013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.032320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.032377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.032396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.036770] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.036832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.036852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.041202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.041267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.041287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.045596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.045669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.379 [2024-12-10 22:58:42.045687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.379 [2024-12-10 22:58:42.050004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.379 [2024-12-10 22:58:42.050078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.380 [2024-12-10 22:58:42.050096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0
00:27:34.380 [2024-12-10 22:58:42.054426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.054495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.054517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.058810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.058896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.058915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.063225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.063287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.063306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.067615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.067678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.067697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.072033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.072091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.072110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.076411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.076473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.076492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.080768] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.080827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.080846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.085486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.085660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.085678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.091616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.091771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.091790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.380 [2024-12-10 22:58:42.097884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.380 [2024-12-10 22:58:42.097966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.380 [2024-12-10 22:58:42.097986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.104190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.104343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.104362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.110589] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.110745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.110764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.116986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.117132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.117152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.123407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.123570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.123589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.129917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.130076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.130095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.136378] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.136525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.136545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.142896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.143044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.143063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.149417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.149575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.149595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.156175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.156323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.156343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.162779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.162938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.162956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.168747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.169043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.169065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.174880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.175196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.175217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.181195] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.181315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.181336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.187564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.187896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.187918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.193598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.193826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.193847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.199543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.199820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.199842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.205557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.205869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.205893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.212344] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.212634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.212655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.219474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.219778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.219799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.227135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.227394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.227416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.233103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.233356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.233377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.239753] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.641 [2024-12-10 22:58:42.239973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.641 [2024-12-10 22:58:42.239994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.641 [2024-12-10 22:58:42.244456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.244678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.244699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.249014] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.249259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.249281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.253413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.253630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.253651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.257964] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.258193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.258214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.262405] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.262631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.262651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.266898] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.267123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.267144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.271354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.271593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.271614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.275640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.275879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.275900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.280059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.280292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.280313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.284658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.284883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.284904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.289427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.289656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.289676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.294460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.294682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.294702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.299419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.299651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.299673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.303847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.304068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.304087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.308660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.308882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.308903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.313790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.314014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.314034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.318455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.318669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.318689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.322895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.323119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.323140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.327698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.327933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.327954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.332770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.333006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.333027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.337959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.338189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.338214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.343020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.343521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.343542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.348497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.348715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.348736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.642 [2024-12-10 22:58:42.353210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.642 [2024-12-10 22:58:42.353428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.642 [2024-12-10 22:58:42.353448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.643 [2024-12-10 22:58:42.357656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.643 [2024-12-10 22:58:42.357885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.643 [2024-12-10 22:58:42.357906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.643 [2024-12-10 22:58:42.362010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.643 [2024-12-10 22:58:42.362252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.643 [2024-12-10 22:58:42.362273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.366479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.366709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.366730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.370878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.371108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.371129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.375281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.375528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.375550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.379584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.379822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.379842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.383941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.384183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.384204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.388072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.388283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.388302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.392111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.392292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.392311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.396080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.396265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.396284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.400045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.400236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.400255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.403988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.404167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.404185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.407979] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.408165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.408184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.411978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.412176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.412196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.416032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.416221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.416240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.420005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.420180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.420198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.423954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.424130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.424149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.427916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.428095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.428114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.431845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.432029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.432048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.435868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.436040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.436060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.439909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.904 [2024-12-10 22:58:42.440094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.904 [2024-12-10 22:58:42.440113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.904 [2024-12-10 22:58:42.443914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.905 [2024-12-10 22:58:42.444100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.905 [2024-12-10 22:58:42.444119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:34.905 [2024-12-10 22:58:42.447944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.905 [2024-12-10 22:58:42.448112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.905 [2024-12-10 22:58:42.448134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:34.905 [2024-12-10 22:58:42.451985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.905 [2024-12-10 22:58:42.452139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.905 [2024-12-10 22:58:42.452165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:34.905 [2024-12-10 22:58:42.456307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.905 [2024-12-10 22:58:42.456491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.905 [2024-12-10 22:58:42.456509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:34.905 [2024-12-10 22:58:42.461820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8
00:27:34.905 [2024-12-10 22:58:42.461966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:34.905 [2024-12-10 22:58:42.461985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.467365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.467542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.467562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.474754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.474934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.474953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.480645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.480778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.480797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.486644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.486822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9920 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.486840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.491415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.491596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.491615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.496255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.496404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.496423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.500984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.501148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.501175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.506176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.506276] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.506295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.511487] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.511639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.511657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.516062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.516227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.516246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.520868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.521059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.521077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.527006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.527171] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.527191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.531822] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.531932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.531951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.536293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.536457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.536476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.540896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.541056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.541075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.545397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with 
pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.545575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.545596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.550226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.550291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.550309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.555016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.555192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.555211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.559634] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.559816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.559836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.563848] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.564015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.905 [2024-12-10 22:58:42.564034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.905 [2024-12-10 22:58:42.567890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.905 [2024-12-10 22:58:42.568063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.568081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.571934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.572104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.572123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.576054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.576257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.576280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 
22:58:42.580166] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.580334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.580353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.584192] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.584358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.584377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.588282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.588445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.588465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.592545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.592720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.592741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.597031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.597188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.597207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.601654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.601800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.601819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.606552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.606701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.606720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.611235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.611403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.611421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.615404] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.615586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.615607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.619415] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.619568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.619586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.623524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.623686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.623705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:34.906 [2024-12-10 22:58:42.627715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:34.906 [2024-12-10 22:58:42.627891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:34.906 [2024-12-10 22:58:42.627910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.631965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.632119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.632138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.636189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.636370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.636392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.640471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.640640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.640661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.644548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.644707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.167 [2024-12-10 22:58:42.644726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.648712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.648891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.648912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.652803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.652969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.652987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.656763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.656931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.656950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.660922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.661091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.661110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.665471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.665619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.665637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.670047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.670219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.670237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.674200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.674374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.674393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.167 6310.00 IOPS, 788.75 MiB/s [2024-12-10T21:58:42.896Z] [2024-12-10 22:58:42.679407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 
[2024-12-10 22:58:42.679489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.679509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.683515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.683608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.683627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.687599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.687689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.687708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.691638] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.691740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.691759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.695712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.695796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.695815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.699782] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.167 [2024-12-10 22:58:42.699870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.167 [2024-12-10 22:58:42.699889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.167 [2024-12-10 22:58:42.703842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.703933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.703952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.707894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.707986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.708005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.711933] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.712018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.712037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.715948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.716033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.716052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.719947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.720045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.720064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.723921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.724034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.724053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:35.168 [2024-12-10 22:58:42.727866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.727953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.727971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.731799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.731899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.731918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.735735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.735828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.735848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.739648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.739752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.739771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.743649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.743729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.743749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.747719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.747833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.747852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.751717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.751793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.751812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.755718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.755803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.755825] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.759650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.759749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.759768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.763580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.763667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.763686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.767501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.767597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.767616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.771460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.771539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.168 [2024-12-10 22:58:42.771558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.775449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.775515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.775534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.779380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.779499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.779518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.783314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.783409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.783427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.787307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.787388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.787408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.791799] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.791975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.791994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.796482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.796562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.796581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.800777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.800871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.800890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.168 [2024-12-10 22:58:42.805127] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.168 [2024-12-10 22:58:42.805212] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.168 [2024-12-10 22:58:42.805231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.810244] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.810362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.810380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.814677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.814807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.814826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.820759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.820972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.820992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.827296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 
00:27:35.169 [2024-12-10 22:58:42.827427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.827446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.833340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.833451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.833471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.839066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.839151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.839176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.845033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.845163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.845183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.850452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.850518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.850536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.854463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.854543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.854562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.858419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.858504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.858523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.862473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.862591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.862610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.867069] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.867273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.867292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.872139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.872346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.872365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.876666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.876787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.876810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.880823] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.880941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.880961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:35.169 [2024-12-10 22:58:42.884868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.884944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.884963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.169 [2024-12-10 22:58:42.888938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.169 [2024-12-10 22:58:42.889079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.169 [2024-12-10 22:58:42.889099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.430 [2024-12-10 22:58:42.893336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.430 [2024-12-10 22:58:42.893458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.430 [2024-12-10 22:58:42.893477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.430 [2024-12-10 22:58:42.897403] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.430 [2024-12-10 22:58:42.897487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.430 [2024-12-10 22:58:42.897507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.430 [2024-12-10 22:58:42.901790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.430 [2024-12-10 22:58:42.901885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.430 [2024-12-10 22:58:42.901906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.430 [2024-12-10 22:58:42.905845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.430 [2024-12-10 22:58:42.905934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.430 [2024-12-10 22:58:42.905953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.910234] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.910417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.910439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.915468] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.915653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.915673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.920541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.920711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.920730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.925632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.925784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.925803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.930784] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.930907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.930926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.936308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.936457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.431 [2024-12-10 22:58:42.936492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.941302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.941452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.941472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.946705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.946877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.946897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.952065] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.952210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.952229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.957489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.957696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.957716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.962610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.962779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.962799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.967806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.968008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.968027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.972903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.973099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.973118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.978021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.978174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.978194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.983148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.983437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.983459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.988366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.988516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.988536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.993733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:42.993886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.993905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:42.998918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 
00:27:35.431 [2024-12-10 22:58:42.999091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:42.999110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.004201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.004363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.004386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.009573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.009747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.009767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.014654] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.014820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.014838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.019739] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.019906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.019926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.024810] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.024991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.025010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.029906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.030064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.030083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.035070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.035262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.431 [2024-12-10 22:58:43.035281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.431 [2024-12-10 22:58:43.040167] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.431 [2024-12-10 22:58:43.040355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.040374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.045671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.045871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.045891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.050798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.051069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.051090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.056479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.056728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.056749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:35.432 [2024-12-10 22:58:43.061660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.061823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.061842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.066880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.067166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.067186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.072191] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.072359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.072378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.077336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.077556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.077576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.082353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.082515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.082535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.087438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.087599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.087617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.092473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.092677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.092696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.097601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.097763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.097782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.102829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.103045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.103074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.108339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.108520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.108540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.113449] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.113643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.113661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.117918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.118069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.432 [2024-12-10 22:58:43.118089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.122325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.122483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.122502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.126549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.126743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.126766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.130684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.130847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.130868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.134696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.134830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.134854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.138833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.138991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.139010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.143806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.143956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.143975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.148878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.149037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.149057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.432 [2024-12-10 22:58:43.153915] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.432 [2024-12-10 22:58:43.154118] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.432 [2024-12-10 22:58:43.154136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.693 [2024-12-10 22:58:43.159006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.693 [2024-12-10 22:58:43.159194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.693 [2024-12-10 22:58:43.159213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.693 [2024-12-10 22:58:43.164091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.693 [2024-12-10 22:58:43.164282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.693 [2024-12-10 22:58:43.164301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.693 [2024-12-10 22:58:43.169186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.693 [2024-12-10 22:58:43.169349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.169368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.174257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 
[2024-12-10 22:58:43.174441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.174460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.179355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.179512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.179531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.184636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.184848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.184868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.189756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.189944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.189963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.195047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.195244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.195265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.200188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.200450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.200471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.205136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.205347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.205366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.210342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.210432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.210452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.215888] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.216031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.216050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.221073] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.221198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.221217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.225832] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.225886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.225906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.231462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.231545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.231565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:35.694 [2024-12-10 22:58:43.236551] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.236682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.236701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.242809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.242943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.242962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.248985] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.249089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.249108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.254235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.254372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.254391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.258636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.258691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.258711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.262725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.262831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.262850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.267400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.267554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.267576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.272783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.272907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.272927] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.277017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.277111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.277130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.281338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.281395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.281414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.285833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.285959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.285978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.290000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.290057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.694 [2024-12-10 22:58:43.290076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.694 [2024-12-10 22:58:43.294037] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.694 [2024-12-10 22:58:43.294100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.694 [2024-12-10 22:58:43.294119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.297972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.298028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.298047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.301883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.301937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.301956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.306818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.306920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.306940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.312281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.312372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.312391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.317445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.317563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.317582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.323590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.323693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.323711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.328863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.328942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.328961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.333302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.333407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.333425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.337512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.337602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.337621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.341778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.341881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.341900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.345975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 
00:27:35.695 [2024-12-10 22:58:43.346084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.346103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.350120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.350192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.350210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.354175] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.354302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.354321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.359041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.359209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.359228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.364232] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.364416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.364434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.369632] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.369820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.369838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.374748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.374875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.374893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.380222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.380366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.380385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.385389] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.385577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.385596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.390779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.390953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.390976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.396012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.396173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.396191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.401066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.401246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.401265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:35.695 [2024-12-10 22:58:43.406222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.406336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.406355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.411397] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.411536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.411556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.695 [2024-12-10 22:58:43.416961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.695 [2024-12-10 22:58:43.417080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.695 [2024-12-10 22:58:43.417100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.422026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.422165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.422200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.427626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.427819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.427838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.432854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.433048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.433067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.437977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.438131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.438151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.443425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.443583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.443601] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.448511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.448678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.448697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.453809] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.453988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.454008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.458922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.459079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.956 [2024-12-10 22:58:43.459098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.956 [2024-12-10 22:58:43.464257] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.956 [2024-12-10 22:58:43.464395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.957 [2024-12-10 22:58:43.464425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.469525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.469730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.469749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.475077] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.475231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.475250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.480318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.480452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.480471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.485816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.485897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.485916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.491035] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.491225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.491244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.496294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.496438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.496456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.501943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.502063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.502081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.507384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.507530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.507549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.514155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.514248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.514268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.520336] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.520528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.520547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.527214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.527281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.527299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.531510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 
00:27:35.957 [2024-12-10 22:58:43.531565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.531587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.535362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.535414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.535433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.539271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.539339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.539357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.543132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.543195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.543214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.546949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.547015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.547033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.550816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.550870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.550888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.554649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.554705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.554724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.558456] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.558516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.558534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.562323] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.562375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.562393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.566327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.566414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.566435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.570733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.570798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.570816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.575542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.575598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.575616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:35.957 [2024-12-10 22:58:43.580421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.580489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.580508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.585389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.957 [2024-12-10 22:58:43.585560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.957 [2024-12-10 22:58:43.585578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.957 [2024-12-10 22:58:43.592022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.592174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.592193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.597791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.597895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.597914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.602711] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.602782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.602801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.606870] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.606968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.606986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.611853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.611941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.611961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.616914] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.617036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.617055] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.621062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.621181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.621201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.625164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.625241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.625260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.629254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.629355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.629374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.633330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.633430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:35.958 [2024-12-10 22:58:43.633449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.637342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.637447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.637466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.641605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.641724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.641742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.645720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.645859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.645881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.649781] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.649868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.649887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.653942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.654063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.654082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.658425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.658515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.658534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.663089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.663151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.663175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.668033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.668126] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.668145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:35.958 [2024-12-10 22:58:43.673761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:35.958 [2024-12-10 22:58:43.673878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:35.958 [2024-12-10 22:58:43.673896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:35.958 6382.50 IOPS, 797.81 MiB/s [2024-12-10T21:58:43.687Z] [2024-12-10 22:58:43.681142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x158a150) with pdu=0x200016eff3c8 00:27:36.217 [2024-12-10 22:58:43.681285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:36.217 [2024-12-10 22:58:43.681305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:36.217 00:27:36.217 Latency(us) 00:27:36.217 [2024-12-10T21:58:43.946Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.217 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:36.217 nvme0n1 : 2.00 6378.38 797.30 0.00 0.00 2503.83 1446.07 8263.23 00:27:36.217 [2024-12-10T21:58:43.946Z] =================================================================================================================== 00:27:36.217 [2024-12-10T21:58:43.946Z] Total : 6378.38 797.30 0.00 0.00 2503.83 1446.07 8263.23 00:27:36.217 { 00:27:36.217 "results": [ 00:27:36.217 { 00:27:36.217 "job": 
"nvme0n1", 00:27:36.217 "core_mask": "0x2", 00:27:36.217 "workload": "randwrite", 00:27:36.217 "status": "finished", 00:27:36.218 "queue_depth": 16, 00:27:36.218 "io_size": 131072, 00:27:36.218 "runtime": 2.003801, 00:27:36.218 "iops": 6378.377892814706, 00:27:36.218 "mibps": 797.2972366018382, 00:27:36.218 "io_failed": 0, 00:27:36.218 "io_timeout": 0, 00:27:36.218 "avg_latency_us": 2503.8340547619937, 00:27:36.218 "min_latency_us": 1446.0660869565218, 00:27:36.218 "max_latency_us": 8263.234782608695 00:27:36.218 } 00:27:36.218 ], 00:27:36.218 "core_count": 1 00:27:36.218 } 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:36.218 | .driver_specific 00:27:36.218 | .nvme_error 00:27:36.218 | .status_code 00:27:36.218 | .command_transient_transport_error' 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 413 > 0 )) 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2524341 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2524341 ']' 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2524341 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.218 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2524341 00:27:36.477 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:36.477 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:36.477 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2524341' 00:27:36.477 killing process with pid 2524341 00:27:36.477 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2524341 00:27:36.477 Received shutdown signal, test time was about 2.000000 seconds 00:27:36.477 00:27:36.477 Latency(us) 00:27:36.477 [2024-12-10T21:58:44.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:36.477 [2024-12-10T21:58:44.206Z] =================================================================================================================== 00:27:36.477 [2024-12-10T21:58:44.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:36.477 22:58:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2524341 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2522580 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2522580 ']' 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2522580 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:36.477 
22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522580 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522580' 00:27:36.477 killing process with pid 2522580 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2522580 00:27:36.477 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2522580 00:27:36.736 00:27:36.736 real 0m14.178s 00:27:36.736 user 0m27.130s 00:27:36.736 sys 0m4.625s 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:36.736 ************************************ 00:27:36.736 END TEST nvmf_digest_error 00:27:36.736 ************************************ 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:36.736 22:58:44 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:36.736 rmmod nvme_tcp 00:27:36.736 rmmod nvme_fabrics 00:27:36.736 rmmod nvme_keyring 00:27:36.736 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2522580 ']' 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2522580 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2522580 ']' 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2522580 00:27:36.737 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2522580) - No such process 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2522580 is not found' 00:27:36.737 Process with pid 2522580 is not found 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:36.737 22:58:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.281 22:58:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:39.281 00:27:39.281 real 0m36.612s 00:27:39.281 user 0m55.764s 00:27:39.281 sys 0m13.818s 00:27:39.281 22:58:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:39.281 22:58:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:39.281 ************************************ 00:27:39.281 END TEST nvmf_digest 00:27:39.282 ************************************ 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:39.282 ************************************ 00:27:39.282 START TEST nvmf_bdevperf 00:27:39.282 ************************************ 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:27:39.282 * Looking for test storage... 00:27:39.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:27:39.282 22:58:46 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.282 --rc genhtml_branch_coverage=1 00:27:39.282 --rc genhtml_function_coverage=1 00:27:39.282 --rc genhtml_legend=1 00:27:39.282 --rc geninfo_all_blocks=1 
00:27:39.282 --rc geninfo_unexecuted_blocks=1 00:27:39.282 00:27:39.282 ' 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:39.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.282 --rc genhtml_branch_coverage=1 00:27:39.282 --rc genhtml_function_coverage=1 00:27:39.282 --rc genhtml_legend=1 00:27:39.282 --rc geninfo_all_blocks=1 00:27:39.282 --rc geninfo_unexecuted_blocks=1 00:27:39.282 00:27:39.282 ' 00:27:39.282 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:39.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.283 --rc genhtml_branch_coverage=1 00:27:39.283 --rc genhtml_function_coverage=1 00:27:39.283 --rc genhtml_legend=1 00:27:39.283 --rc geninfo_all_blocks=1 00:27:39.283 --rc geninfo_unexecuted_blocks=1 00:27:39.283 00:27:39.283 ' 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:39.283 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:39.283 --rc genhtml_branch_coverage=1 00:27:39.283 --rc genhtml_function_coverage=1 00:27:39.283 --rc genhtml_legend=1 00:27:39.283 --rc geninfo_all_blocks=1 00:27:39.283 --rc geninfo_unexecuted_blocks=1 00:27:39.283 00:27:39.283 ' 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:39.283 22:58:46 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:27:39.283 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:39.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:39.284 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:27:39.285 22:58:46 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.863 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:45.864 Found 
0000:86:00.0 (0x8086 - 0x159b) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:45.864 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:45.864 Found net devices under 0000:86:00.0: cvl_0_0 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:45.864 Found net devices under 0000:86:00.1: cvl_0_1 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.864 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.864 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:27:45.864 00:27:45.864 --- 10.0.0.2 ping statistics --- 00:27:45.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.864 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.864 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:45.864 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:27:45.864 00:27:45.864 --- 10.0.0.1 ping statistics --- 00:27:45.864 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.864 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2528358 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2528358 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2528358 ']' 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.864 22:58:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:45.864 [2024-12-10 22:58:52.744109] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:45.864 [2024-12-10 22:58:52.744154] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.864 [2024-12-10 22:58:52.823954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:45.865 [2024-12-10 22:58:52.866741] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.865 [2024-12-10 22:58:52.866776] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
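The `nvmf_tcp_init` steps traced above split one NIC pair between a network namespace (target side, 10.0.0.2) and the root namespace (initiator side, 10.0.0.1), then verify both directions with `ping`. A minimal sketch of that split, not the literal `nvmf/common.sh` code; the interface names and addresses are taken from this trace, and the `$IP` indirection is an added convenience so the sequence can be dry-run without root:

```shell
#!/usr/bin/env bash
# Sketch of the target/initiator namespace split performed by nvmf_tcp_init.
# IP defaults to the real ip(8) binary; set IP=echo for a rootless dry run.
IP=${IP:-ip}
NS=cvl_0_0_ns_spdk

setup_nvmf_tcp() {
    local target_if=$1 initiator_if=$2
    $IP -4 addr flush "$target_if"                 # start from clean interfaces
    $IP -4 addr flush "$initiator_if"
    $IP netns add "$NS"
    $IP link set "$target_if" netns "$NS"          # target NIC moves into the netns
    $IP addr add 10.0.0.1/24 dev "$initiator_if"   # initiator side, root namespace
    $IP netns exec "$NS" ip addr add 10.0.0.2/24 dev "$target_if"
    $IP link set "$initiator_if" up
    $IP netns exec "$NS" ip link set "$target_if" up
    $IP netns exec "$NS" ip link set lo up
}
```

Running `IP=echo setup_nvmf_tcp cvl_0_0 cvl_0_1` prints the command sequence without touching the host, which is also how the sketch can be sanity-checked.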
00:27:45.865 [2024-12-10 22:58:52.866784] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.865 [2024-12-10 22:58:52.866790] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.865 [2024-12-10 22:58:52.866796] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:45.865 [2024-12-10 22:58:52.868141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.865 [2024-12-10 22:58:52.868249] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.865 [2024-12-10 22:58:52.868250] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.865 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:45.865 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:45.865 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:45.865 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.865 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.124 [2024-12-10 22:58:53.627898] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.124 22:58:53 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.124 Malloc0 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:46.124 [2024-12-10 22:58:53.694894] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:27:46.124 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:46.125 { 00:27:46.125 "params": { 00:27:46.125 "name": "Nvme$subsystem", 00:27:46.125 "trtype": "$TEST_TRANSPORT", 00:27:46.125 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.125 "adrfam": "ipv4", 00:27:46.125 "trsvcid": "$NVMF_PORT", 00:27:46.125 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.125 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.125 "hdgst": ${hdgst:-false}, 00:27:46.125 "ddgst": ${ddgst:-false} 00:27:46.125 }, 00:27:46.125 "method": "bdev_nvme_attach_controller" 00:27:46.125 } 00:27:46.125 EOF 00:27:46.125 )") 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:46.125 22:58:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:46.125 "params": { 00:27:46.125 "name": "Nvme1", 00:27:46.125 "trtype": "tcp", 00:27:46.125 "traddr": "10.0.0.2", 00:27:46.125 "adrfam": "ipv4", 00:27:46.125 "trsvcid": "4420", 00:27:46.125 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.125 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.125 "hdgst": false, 00:27:46.125 "ddgst": false 00:27:46.125 }, 00:27:46.125 "method": "bdev_nvme_attach_controller" 00:27:46.125 }' 00:27:46.125 [2024-12-10 22:58:53.746167] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:46.125 [2024-12-10 22:58:53.746212] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528604 ] 00:27:46.125 [2024-12-10 22:58:53.820936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.384 [2024-12-10 22:58:53.862106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.384 Running I/O for 1 seconds... 
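The `tgt_init` rpc_cmd sequence traced above (bdevperf.sh lines 17-21) is the standard SPDK NVMe-oF target bring-up: create the TCP transport, back it with a malloc bdev, and expose that bdev through a subsystem listener. A hand-run equivalent, sketched with an `$RPC` indirection (an added convenience, defaulting to `echo` so the block is a dry run; point it at SPDK's `scripts/rpc.py` against a live `nvmf_tgt` to apply it):

```shell
# Dry-run sketch of tgt_init from host/bdevperf.sh.
# RPC=echo just prints the calls; RPC=path/to/rpc.py executes them.
RPC=${RPC:-echo}

tgt_init() {
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

In the trace the same calls run under `ip netns exec cvl_0_0_ns_spdk`, since the target application lives inside the namespace.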
00:27:47.762 11072.00 IOPS, 43.25 MiB/s 00:27:47.762 Latency(us) 00:27:47.762 [2024-12-10T21:58:55.491Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.762 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:47.762 Verification LBA range: start 0x0 length 0x4000 00:27:47.762 Nvme1n1 : 1.01 11165.29 43.61 0.00 0.00 11405.85 658.92 11739.49 00:27:47.762 [2024-12-10T21:58:55.491Z] =================================================================================================================== 00:27:47.762 [2024-12-10T21:58:55.491Z] Total : 11165.29 43.61 0.00 0.00 11405.85 658.92 11739.49 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2528839 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.762 { 00:27:47.762 "params": { 00:27:47.762 "name": "Nvme$subsystem", 00:27:47.762 "trtype": "$TEST_TRANSPORT", 00:27:47.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.762 "adrfam": "ipv4", 00:27:47.762 "trsvcid": "$NVMF_PORT", 00:27:47.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.762 "hdgst": ${hdgst:-false}, 00:27:47.762 "ddgst": 
${ddgst:-false} 00:27:47.762 }, 00:27:47.762 "method": "bdev_nvme_attach_controller" 00:27:47.762 } 00:27:47.762 EOF 00:27:47.762 )") 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:27:47.762 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:27:47.763 22:58:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:47.763 "params": { 00:27:47.763 "name": "Nvme1", 00:27:47.763 "trtype": "tcp", 00:27:47.763 "traddr": "10.0.0.2", 00:27:47.763 "adrfam": "ipv4", 00:27:47.763 "trsvcid": "4420", 00:27:47.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:47.763 "hdgst": false, 00:27:47.763 "ddgst": false 00:27:47.763 }, 00:27:47.763 "method": "bdev_nvme_attach_controller" 00:27:47.763 }' 00:27:47.763 [2024-12-10 22:58:55.322427] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:27:47.763 [2024-12-10 22:58:55.322474] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2528839 ] 00:27:47.763 [2024-12-10 22:58:55.398299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:47.763 [2024-12-10 22:58:55.436300] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.021 Running I/O for 15 seconds... 
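The `gen_nvmf_target_json` pattern visible twice above builds one `bdev_nvme_attach_controller` stanza per subsystem into a `config` array, comma-joins it with `IFS=,`, and feeds the result to bdevperf via `--json /dev/fd/6x`. A compact reconstruction of that idea (a sketch, not the exact `nvmf/common.sh` function — the real one uses a heredoc and pipes through `jq .`; values here are the ones substituted in this trace):

```shell
# Emit comma-joined attach-controller JSON stanzas, one per subsystem index.
# With no arguments it defaults to subsystem 1, as "${@:-1}" does in the trace.
gen_target_json() {
    local config=() subsystem
    for subsystem in "${@:-1}"; do
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' \
            "$subsystem" "$subsystem" "$subsystem")")
    done
    local IFS=,             # array join character for "${config[*]}"
    printf '%s\n' "${config[*]}"
}
```

Usage: `gen_target_json` prints the single Nvme1 stanza seen in the trace; `gen_target_json 1 2` prints two stanzas joined by a comma.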
00:27:50.336 11099.00 IOPS, 43.36 MiB/s [2024-12-10T21:58:58.326Z] 11054.00 IOPS, 43.18 MiB/s [2024-12-10T21:58:58.326Z] 22:58:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2528358 00:27:50.597 22:58:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:50.597 [2024-12-10 22:58:58.297637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:101464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.597 [2024-12-10 22:58:58.297675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.597 [2024-12-10 22:58:58.297695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:101472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.597 [2024-12-10 22:58:58.297704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.597 [2024-12-10 22:58:58.297713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:101480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.597 [2024-12-10 22:58:58.297721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.597 [2024-12-10 22:58:58.297730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:101488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.597 [2024-12-10 22:58:58.297737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.597 [2024-12-10 22:58:58.297746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:101496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.597 [2024-12-10 22:58:58.297759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.597 [2024-12-10 22:58:58.297769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:101504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.598 [2024-12-10 22:58:58.297777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.598 [2024-12-10 22:58:58.297785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.598 [2024-12-10 22:58:58.297793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.598 [2024-12-10 22:58:58.297803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:101520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.598 [2024-12-10 22:58:58.297811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.598 [2024-12-10 22:58:58.297820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:101528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.598 [2024-12-10 22:58:58.297826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.598 [2024-12-10 22:58:58.297836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:101536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.598 [2024-12-10 22:58:58.297843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.598 [2024-12-10 22:58:58.297851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:101544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.598 [2024-12-10 22:58:58.297859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical READ / ABORTED - SQ DELETION (00/08) completion pairs for lba 101552 through 101896 elided ...] 00:27:50.599 [2024-12-10 22:58:58.298675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:101904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:101912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:101920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:101928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:101936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:101944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:101952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.599 [2024-12-10 22:58:58.298771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:101960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:101968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:101984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:101992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.298985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.298993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.599 [2024-12-10 22:58:58.299030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:50.599 [2024-12-10 22:58:58.299044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.599 [2024-12-10 22:58:58.299131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.599 [2024-12-10 22:58:58.299140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.600 [2024-12-10 22:58:58.299307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:27:50.600 [2024-12-10 22:58:58.299571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.600 [2024-12-10 22:58:58.299759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.600 [2024-12-10 22:58:58.299766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.601 [2024-12-10 22:58:58.299774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:50.601 [2024-12-10 22:58:58.299780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.601 [2024-12-10 22:58:58.299787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a4510 is same with the state(6) to be set 00:27:50.601 [2024-12-10 22:58:58.299797] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:50.601 [2024-12-10 22:58:58.299803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:50.601 [2024-12-10 22:58:58.299809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102472 len:8 PRP1 0x0 PRP2 0x0 00:27:50.601 [2024-12-10 22:58:58.299818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.601 [2024-12-10 22:58:58.302753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] 
resetting controller 00:27:50.601 [2024-12-10 22:58:58.302804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.601 [2024-12-10 22:58:58.303282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.601 [2024-12-10 22:58:58.303301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.601 [2024-12-10 22:58:58.303309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.601 [2024-12-10 22:58:58.303483] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.601 [2024-12-10 22:58:58.303658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.601 [2024-12-10 22:58:58.303666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.601 [2024-12-10 22:58:58.303674] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.601 [2024-12-10 22:58:58.303681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.601 [2024-12-10 22:58:58.315861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:50.601 [2024-12-10 22:58:58.316259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.601 [2024-12-10 22:58:58.316278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:50.601 [2024-12-10 22:58:58.316287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:50.601 [2024-12-10 22:58:58.316504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:50.601 [2024-12-10 22:58:58.316684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:50.601 [2024-12-10 22:58:58.316694] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:50.601 [2024-12-10 22:58:58.316702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:50.601 [2024-12-10 22:58:58.316709] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:50.861 [2024-12-10 22:58:58.328907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:50.861 [2024-12-10 22:58:58.329318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.862 [2024-12-10 22:58:58.329337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:50.862 [2024-12-10 22:58:58.329346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:50.862 [2024-12-10 22:58:58.329532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:50.862 [2024-12-10 22:58:58.329724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:50.862 [2024-12-10 22:58:58.329734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:50.862 [2024-12-10 22:58:58.329741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:50.862 [2024-12-10 22:58:58.329748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:50.862 [2024-12-10 22:58:58.341836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:50.862 [2024-12-10 22:58:58.342235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.862 [2024-12-10 22:58:58.342253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:50.862 [2024-12-10 22:58:58.342260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:50.862 [2024-12-10 22:58:58.342428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:50.862 [2024-12-10 22:58:58.342594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:50.862 [2024-12-10 22:58:58.342603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:50.862 [2024-12-10 22:58:58.342610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:50.862 [2024-12-10 22:58:58.342616] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:50.862 [2024-12-10 22:58:58.354783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:50.862 [2024-12-10 22:58:58.355208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:50.862 [2024-12-10 22:58:58.355266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:50.862 [2024-12-10 22:58:58.355290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:50.862 [2024-12-10 22:58:58.355874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:50.862 [2024-12-10 22:58:58.356471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:50.862 [2024-12-10 22:58:58.356499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:50.862 [2024-12-10 22:58:58.356521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:50.862 [2024-12-10 22:58:58.356550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:50.862 [2024-12-10 22:58:58.367665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.862 [2024-12-10 22:58:58.368014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.862 [2024-12-10 22:58:58.368031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.862 [2024-12-10 22:58:58.368039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.862 [2024-12-10 22:58:58.368225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.862 [2024-12-10 22:58:58.368399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.862 [2024-12-10 22:58:58.368409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.862 [2024-12-10 22:58:58.368416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.862 [2024-12-10 22:58:58.368423] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.862 [2024-12-10 22:58:58.380559] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.862 [2024-12-10 22:58:58.380979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.862 [2024-12-10 22:58:58.380997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.862 [2024-12-10 22:58:58.381004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.862 [2024-12-10 22:58:58.381174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.862 [2024-12-10 22:58:58.381363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.862 [2024-12-10 22:58:58.381376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.862 [2024-12-10 22:58:58.381383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.862 [2024-12-10 22:58:58.381390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.862 [2024-12-10 22:58:58.393364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.862 [2024-12-10 22:58:58.393785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.862 [2024-12-10 22:58:58.393822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.862 [2024-12-10 22:58:58.393848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.862 [2024-12-10 22:58:58.394410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.862 [2024-12-10 22:58:58.394585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.862 [2024-12-10 22:58:58.394595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.862 [2024-12-10 22:58:58.394602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.862 [2024-12-10 22:58:58.394608] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.862 [2024-12-10 22:58:58.406265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.862 [2024-12-10 22:58:58.406681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.862 [2024-12-10 22:58:58.406698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.862 [2024-12-10 22:58:58.406705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.862 [2024-12-10 22:58:58.406869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.862 [2024-12-10 22:58:58.407033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.862 [2024-12-10 22:58:58.407043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.862 [2024-12-10 22:58:58.407049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.862 [2024-12-10 22:58:58.407055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.862 [2024-12-10 22:58:58.419224] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.862 [2024-12-10 22:58:58.419641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.862 [2024-12-10 22:58:58.419658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.862 [2024-12-10 22:58:58.419666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.862 [2024-12-10 22:58:58.419829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.862 [2024-12-10 22:58:58.419993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.862 [2024-12-10 22:58:58.420002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.862 [2024-12-10 22:58:58.420009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.862 [2024-12-10 22:58:58.420019] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.862 [2024-12-10 22:58:58.432043] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.862 [2024-12-10 22:58:58.432439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.862 [2024-12-10 22:58:58.432457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.862 [2024-12-10 22:58:58.432464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.862 [2024-12-10 22:58:58.432628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.862 [2024-12-10 22:58:58.432793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.862 [2024-12-10 22:58:58.432803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.432809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.432815] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.444900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.445255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.445300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.445325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.445882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.446047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.446057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.446062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.446068] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.457813] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.458232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.458249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.458257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.458420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.458585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.458594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.458600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.458606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.470768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.471186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.471206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.471214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.471377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.471541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.471550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.471557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.471564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.483686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.484103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.484147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.484184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.484703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.484878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.484888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.484894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.484900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.496560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.496983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.496999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.497007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.497176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.497364] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.497374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.497381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.497387] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.509374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.509792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.509809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.509816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.509984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.510147] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.510163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.510170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.510177] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.522252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.522667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.522684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.522692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.522855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.523019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.523028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.523034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.523040] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.535131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.535572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.535590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.535599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.535772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.535946] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.535956] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.535963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.535969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.548084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.548444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.548462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.548469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.548641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.548814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.548827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.548834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.548841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.863 [2024-12-10 22:58:58.561265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.863 [2024-12-10 22:58:58.561706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.863 [2024-12-10 22:58:58.561724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.863 [2024-12-10 22:58:58.561732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.863 [2024-12-10 22:58:58.561911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.863 [2024-12-10 22:58:58.562091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.863 [2024-12-10 22:58:58.562100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.863 [2024-12-10 22:58:58.562107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.863 [2024-12-10 22:58:58.562114] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:50.864 [2024-12-10 22:58:58.574366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:50.864 [2024-12-10 22:58:58.574778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:50.864 [2024-12-10 22:58:58.574795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:50.864 [2024-12-10 22:58:58.574803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:50.864 [2024-12-10 22:58:58.574981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:50.864 [2024-12-10 22:58:58.575166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:50.864 [2024-12-10 22:58:58.575177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:50.864 [2024-12-10 22:58:58.575185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:50.864 [2024-12-10 22:58:58.575191] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.124 [2024-12-10 22:58:58.587468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.124 [2024-12-10 22:58:58.587826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.124 [2024-12-10 22:58:58.587844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.124 [2024-12-10 22:58:58.587852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.124 [2024-12-10 22:58:58.588030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.124 [2024-12-10 22:58:58.588214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.124 [2024-12-10 22:58:58.588225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.124 [2024-12-10 22:58:58.588232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.124 [2024-12-10 22:58:58.588238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.124 [2024-12-10 22:58:58.600417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.124 [2024-12-10 22:58:58.600727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.124 [2024-12-10 22:58:58.600743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.124 [2024-12-10 22:58:58.600751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.124 [2024-12-10 22:58:58.600914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.124 [2024-12-10 22:58:58.601079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.124 [2024-12-10 22:58:58.601088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.124 [2024-12-10 22:58:58.601095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.124 [2024-12-10 22:58:58.601101] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.124 [2024-12-10 22:58:58.613360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.124 [2024-12-10 22:58:58.613785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.124 [2024-12-10 22:58:58.613802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.124 [2024-12-10 22:58:58.613809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.124 [2024-12-10 22:58:58.613983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.124 [2024-12-10 22:58:58.614174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.124 [2024-12-10 22:58:58.614200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.124 [2024-12-10 22:58:58.614208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.124 [2024-12-10 22:58:58.614215] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.124 [2024-12-10 22:58:58.626259] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.626574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.626591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.626599] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.626762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.626926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.626936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.626942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.626948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.639115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.639433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.639454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.639462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.639625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.639789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.639798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.639804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.639810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 9689.00 IOPS, 37.85 MiB/s [2024-12-10T21:58:58.854Z] [2024-12-10 22:58:58.652008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.652321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.652340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.652347] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.652511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.652676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.652685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.652692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.652698] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.664887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.665302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.665349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.665374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.665950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.666116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.666125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.666131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.666137] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.677741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.678060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.678114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.678139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.678747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.679347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.679374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.679395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.679414] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.690677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.691019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.691038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.691046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.691224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.691401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.691412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.691420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.691430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.703682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.704079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.704122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.704146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.704744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.705340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.705376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.705384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.705391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.716760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.717088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.717105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.717112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.717280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.717446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.717459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.717465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.717471] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.729688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.730095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.730111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.730119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.730309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.730484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.730493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.730500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.730506] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.742596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.125 [2024-12-10 22:58:58.743014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.125 [2024-12-10 22:58:58.743031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.125 [2024-12-10 22:58:58.743038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.125 [2024-12-10 22:58:58.743224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.125 [2024-12-10 22:58:58.743399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.125 [2024-12-10 22:58:58.743408] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.125 [2024-12-10 22:58:58.743415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.125 [2024-12-10 22:58:58.743422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.125 [2024-12-10 22:58:58.755493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.755837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.755855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.755862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.756025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.756195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.756205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.756211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.756217] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.768370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.768773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.768791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.768799] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.768971] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.769146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.769156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.769169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.769176] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.781304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.781634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.781652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.781660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.781823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.781989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.782000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.782006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.782013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.794279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.794624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.794672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.794698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.795294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.795570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.795580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.795586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.795592] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.807290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.807642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.807663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.807672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.807845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.808019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.808029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.808036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.808042] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.820472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.820808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.820826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.820834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.821013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.821197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.821208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.821214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.821221] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.833473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.833769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.833787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.833795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.833968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.834142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.834152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.834164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.834171] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.126 [2024-12-10 22:58:58.846575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.126 [2024-12-10 22:58:58.846919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.126 [2024-12-10 22:58:58.846937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.126 [2024-12-10 22:58:58.846945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.126 [2024-12-10 22:58:58.847127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.126 [2024-12-10 22:58:58.847313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.126 [2024-12-10 22:58:58.847323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.126 [2024-12-10 22:58:58.847330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.126 [2024-12-10 22:58:58.847336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.386 [2024-12-10 22:58:58.859533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.859801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.859855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.859879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.860475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.860736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.860746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.860752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.860759] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.872526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.872805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.872823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.872831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.872994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.873166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.873175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.873182] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.873206] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.885428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.885695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.885713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.885720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.885883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.886048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.886057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.886067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.886073] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.898483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.898804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.898821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.898829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.899214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.899393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.899404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.899410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.899417] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.911382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.911659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.911678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.911686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.911859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.912033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.912043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.912050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.912056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.924364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.924637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.924655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.924663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.924836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.925011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.925020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.925027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.925034] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.937473] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.937819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.937837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.937844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.938007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.938176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.938186] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.938192] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.938199] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.950435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.950775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.950793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.950800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.950963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.951129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.951139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.951145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.951151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.963485] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.963773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.963816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.963841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.964436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.965025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.965050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.965082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.965088] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.976328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.976608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.976629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.976637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.976810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.976985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.976994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.977001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.977007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:58.989296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:58.989592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:58.989609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:58.989616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:58.989779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:58.989943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:58.989952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:58.989958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.387 [2024-12-10 22:58:58.989965] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.387 [2024-12-10 22:58:59.002339] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.387 [2024-12-10 22:58:59.002711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.387 [2024-12-10 22:58:59.002729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.387 [2024-12-10 22:58:59.002735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.387 [2024-12-10 22:58:59.002900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.387 [2024-12-10 22:58:59.003064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.387 [2024-12-10 22:58:59.003073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.387 [2024-12-10 22:58:59.003079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.003085] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.015258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.016141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.016172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.016181] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.016363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.016544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.016555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.016563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.016570] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.028364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.028682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.028699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.028706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.028869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.029034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.029043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.029050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.029056] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.041217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.041562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.041580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.041588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.041760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.041934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.041944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.041950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.041957] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.054265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.054637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.054655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.054663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.054836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.055010] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.055020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.055031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.055037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.067239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.067522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.067556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.067564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.067743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.067922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.067932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.067940] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.067946] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.080401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.080697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.080716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.080724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.080901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.081081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.081091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.081098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.081104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.093449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.093914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.093958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.093981] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.094374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.094549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.094559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.094565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.094571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.388 [2024-12-10 22:58:59.106353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.388 [2024-12-10 22:58:59.106766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.388 [2024-12-10 22:58:59.106810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.388 [2024-12-10 22:58:59.106834] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.388 [2024-12-10 22:58:59.107337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.388 [2024-12-10 22:58:59.107518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.388 [2024-12-10 22:58:59.107528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.388 [2024-12-10 22:58:59.107535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.388 [2024-12-10 22:58:59.107542] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.648 [2024-12-10 22:58:59.119501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.119908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.119926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.119934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.120098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.120291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.120302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.120309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.120316] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.132468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.132834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.132852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.132859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.133033] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.133213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.133224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.133231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.133237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.145391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.145685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.145703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.145713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.145878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.146043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.146053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.146059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.146066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.158264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.158661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.158706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.158729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.159328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.159504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.159514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.159521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.159527] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.171166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.171598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.171642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.171665] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.172261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.172731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.172740] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.172747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.172753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.184112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.184492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.184509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.184517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.184680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.184849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.184860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.184866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.184872] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.197077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.197502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.197546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.197570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.198153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.198701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.198710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.198717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.198723] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.210015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.210357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.210375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.210384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.210561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.210725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.210735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.210741] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.210747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.222927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.223352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.649 [2024-12-10 22:58:59.223370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.649 [2024-12-10 22:58:59.223378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.649 [2024-12-10 22:58:59.223541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.649 [2024-12-10 22:58:59.223706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.649 [2024-12-10 22:58:59.223716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.649 [2024-12-10 22:58:59.223726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.649 [2024-12-10 22:58:59.223732] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.649 [2024-12-10 22:58:59.235754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.649 [2024-12-10 22:58:59.236173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.236192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.236199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.236363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.236527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.236538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.236544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.236550] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.248654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.249066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.249084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.249091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.249281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.249455] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.249465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.249473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.249479] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.261481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.261834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.261878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.261902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.262455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.262630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.262638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.262644] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.262651] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.274320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.274732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.274750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.274758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.274922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.275086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.275096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.275102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.275108] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.287133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.287477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.287495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.287503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.287665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.287830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.287839] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.287845] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.287851] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.299966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.300362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.300378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.300386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.300549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.300713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.300722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.300729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.300735] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.312933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.313331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.313349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.313360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.313524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.313688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.313698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.313704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.313711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.325921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.326381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.326400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.326407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.326581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.326755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.326765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.326772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.326779] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.339049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.339490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.339541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.339567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.340076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.340255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.340265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.340272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.340278] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.352063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.352498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.352544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.352569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.650 [2024-12-10 22:58:59.353078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.650 [2024-12-10 22:58:59.353263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.650 [2024-12-10 22:58:59.353274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.650 [2024-12-10 22:58:59.353281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.650 [2024-12-10 22:58:59.353287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.650 [2024-12-10 22:58:59.365150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.650 [2024-12-10 22:58:59.365561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.650 [2024-12-10 22:58:59.365604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.650 [2024-12-10 22:58:59.365629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.651 [2024-12-10 22:58:59.366174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.651 [2024-12-10 22:58:59.366367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.651 [2024-12-10 22:58:59.366377] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.651 [2024-12-10 22:58:59.366383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.651 [2024-12-10 22:58:59.366390] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.378194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.378609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.378654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.378678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.379135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.379322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.379331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.379338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.379344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.391083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.391460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.391478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.391486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.391659] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.391833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.391842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.391853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.391860] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.404046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.404472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.404490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.404497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.404661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.404825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.404835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.404841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.404847] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.416973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.417382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.417398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.417406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.417570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.417733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.417743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.417750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.417756] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.429820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.430266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.430312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.430336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.430918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.431131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.431139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.431146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.431151] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.442736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.443078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.443095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.443102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.443271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.443437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.443446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.443452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.443458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.455749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.456162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.456210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.456235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.456818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.457360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.457370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.457377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.457383] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.468555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.468903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.468920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.468927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.469091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.911 [2024-12-10 22:58:59.469282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.911 [2024-12-10 22:58:59.469292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.911 [2024-12-10 22:58:59.469299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.911 [2024-12-10 22:58:59.469306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.911 [2024-12-10 22:58:59.481457] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.911 [2024-12-10 22:58:59.481875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.911 [2024-12-10 22:58:59.481920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.911 [2024-12-10 22:58:59.481951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.911 [2024-12-10 22:58:59.482361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.482538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.482547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.482554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.482560] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.494477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.494881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.494927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.494951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.495409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.495584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.495594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.495600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.495607] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.507329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.507745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.507762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.507770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.507934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.508099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.508109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.508115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.508121] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.520254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.520668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.520709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.520734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.521280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.521459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.521469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.521476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.521482] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.533107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.533535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.533552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.533560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.533723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.533887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.533897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.533903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.533909] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.546059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.546382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.546400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.546407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.546570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.546734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.546744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.546750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.546756] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.558894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.559317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.559336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.559344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.559518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.559691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.559701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.559708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.559718] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.571815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.572214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.572231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.572239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.572402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.572567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.572576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.572582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.572589] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.584615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.584976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.584993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.585001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.585181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.585377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.585387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.585394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.585401] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.597818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.598229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.598248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.598256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.598443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.598626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.912 [2024-12-10 22:58:59.598636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.912 [2024-12-10 22:58:59.598642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.912 [2024-12-10 22:58:59.598649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.912 [2024-12-10 22:58:59.610865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.912 [2024-12-10 22:58:59.611315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.912 [2024-12-10 22:58:59.611359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.912 [2024-12-10 22:58:59.611383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.912 [2024-12-10 22:58:59.611966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.912 [2024-12-10 22:58:59.612556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.913 [2024-12-10 22:58:59.612566] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.913 [2024-12-10 22:58:59.612573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.913 [2024-12-10 22:58:59.612579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:51.913 [2024-12-10 22:58:59.623807] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:51.913 [2024-12-10 22:58:59.624222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:51.913 [2024-12-10 22:58:59.624240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:51.913 [2024-12-10 22:58:59.624247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:51.913 [2024-12-10 22:58:59.624412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:51.913 [2024-12-10 22:58:59.624576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:51.913 [2024-12-10 22:58:59.624585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:51.913 [2024-12-10 22:58:59.624592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:51.913 [2024-12-10 22:58:59.624598] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.173 [2024-12-10 22:58:59.636885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.173 [2024-12-10 22:58:59.637321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.173 [2024-12-10 22:58:59.637367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.173 [2024-12-10 22:58:59.637391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.173 [2024-12-10 22:58:59.637788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.173 [2024-12-10 22:58:59.637954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.173 [2024-12-10 22:58:59.637963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.173 [2024-12-10 22:58:59.637969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.173 [2024-12-10 22:58:59.637975] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.173 [2024-12-10 22:58:59.649799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.173 [2024-12-10 22:58:59.650209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.173 [2024-12-10 22:58:59.650227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.173 [2024-12-10 22:58:59.650235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.173 [2024-12-10 22:58:59.650402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.173 [2024-12-10 22:58:59.650566] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.173 [2024-12-10 22:58:59.650576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.173 [2024-12-10 22:58:59.650582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.173 [2024-12-10 22:58:59.650588] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.173 7266.75 IOPS, 28.39 MiB/s [2024-12-10T21:58:59.902Z] [2024-12-10 22:58:59.662680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.173 [2024-12-10 22:58:59.663084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.173 [2024-12-10 22:58:59.663129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.173 [2024-12-10 22:58:59.663153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.173 [2024-12-10 22:58:59.663626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.173 [2024-12-10 22:58:59.663801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.173 [2024-12-10 22:58:59.663810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.173 [2024-12-10 22:58:59.663817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.173 [2024-12-10 22:58:59.663823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.173 [2024-12-10 22:58:59.675623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.173 [2024-12-10 22:58:59.676048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.173 [2024-12-10 22:58:59.676093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.173 [2024-12-10 22:58:59.676117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.173 [2024-12-10 22:58:59.676612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.173 [2024-12-10 22:58:59.676788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.173 [2024-12-10 22:58:59.676798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.173 [2024-12-10 22:58:59.676804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.173 [2024-12-10 22:58:59.676811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.173 [2024-12-10 22:58:59.688459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.173 [2024-12-10 22:58:59.688871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.173 [2024-12-10 22:58:59.688888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.173 [2024-12-10 22:58:59.688895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.173 [2024-12-10 22:58:59.689058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.173 [2024-12-10 22:58:59.689249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.173 [2024-12-10 22:58:59.689262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.173 [2024-12-10 22:58:59.689269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.173 [2024-12-10 22:58:59.689275] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.701334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.701686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.701704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.701711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.701874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.702039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.702049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.702055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.702061] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.714297] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.714649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.714667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.714674] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.714838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.715003] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.715013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.715019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.715025] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.727205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.727621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.727639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.727647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.727820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.727994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.728003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.728010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.728020] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.740143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.740573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.740590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.740598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.740762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.740927] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.740936] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.740943] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.740950] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.753029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.753459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.753504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.753528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.754009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.754179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.754189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.754195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.754202] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.765910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.766332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.766371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.766397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.766979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.767181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.767191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.767198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.767205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.778773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.779183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.779228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.779252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.779792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.779956] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.779964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.779971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.779977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.791579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.791992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.792009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.792016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.792202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.792377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.792387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.792394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.792400] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.804398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.804799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.804843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.804867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.805334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.805510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.805520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.805526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.805533] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.817253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.817673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.817690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.174 [2024-12-10 22:58:59.817698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.174 [2024-12-10 22:58:59.817865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.174 [2024-12-10 22:58:59.818030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.174 [2024-12-10 22:58:59.818040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.174 [2024-12-10 22:58:59.818046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.174 [2024-12-10 22:58:59.818052] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.174 [2024-12-10 22:58:59.830058] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.174 [2024-12-10 22:58:59.830489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.174 [2024-12-10 22:58:59.830535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.175 [2024-12-10 22:58:59.830558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.175 [2024-12-10 22:58:59.831030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.175 [2024-12-10 22:58:59.831201] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.175 [2024-12-10 22:58:59.831227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.175 [2024-12-10 22:58:59.831234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.175 [2024-12-10 22:58:59.831241] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.175 [2024-12-10 22:58:59.843028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.175 [2024-12-10 22:58:59.843395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.175 [2024-12-10 22:58:59.843413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.175 [2024-12-10 22:58:59.843420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.175 [2024-12-10 22:58:59.843594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.175 [2024-12-10 22:58:59.843767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.175 [2024-12-10 22:58:59.843777] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.175 [2024-12-10 22:58:59.843783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.175 [2024-12-10 22:58:59.843789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.175 [2024-12-10 22:58:59.856210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.175 [2024-12-10 22:58:59.856640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.175 [2024-12-10 22:58:59.856658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.175 [2024-12-10 22:58:59.856666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.175 [2024-12-10 22:58:59.856844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.175 [2024-12-10 22:58:59.857022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.175 [2024-12-10 22:58:59.857036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.175 [2024-12-10 22:58:59.857042] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.175 [2024-12-10 22:58:59.857049] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.175 [2024-12-10 22:58:59.869308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.175 [2024-12-10 22:58:59.869733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.175 [2024-12-10 22:58:59.869751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.175 [2024-12-10 22:58:59.869759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.175 [2024-12-10 22:58:59.869932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.175 [2024-12-10 22:58:59.870106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.175 [2024-12-10 22:58:59.870116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.175 [2024-12-10 22:58:59.870123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.175 [2024-12-10 22:58:59.870130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.175 [2024-12-10 22:58:59.882210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.175 [2024-12-10 22:58:59.882623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.175 [2024-12-10 22:58:59.882641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.175 [2024-12-10 22:58:59.882649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.175 [2024-12-10 22:58:59.882811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.175 [2024-12-10 22:58:59.882976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.175 [2024-12-10 22:58:59.882985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.175 [2024-12-10 22:58:59.882992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.175 [2024-12-10 22:58:59.882998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.175 [2024-12-10 22:58:59.895273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.175 [2024-12-10 22:58:59.895716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.175 [2024-12-10 22:58:59.895734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.175 [2024-12-10 22:58:59.895742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.175 [2024-12-10 22:58:59.895921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.175 [2024-12-10 22:58:59.896100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.175 [2024-12-10 22:58:59.896111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.175 [2024-12-10 22:58:59.896118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.175 [2024-12-10 22:58:59.896127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.435 [2024-12-10 22:58:59.908128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.435 [2024-12-10 22:58:59.908525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.435 [2024-12-10 22:58:59.908544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.435 [2024-12-10 22:58:59.908551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.435 [2024-12-10 22:58:59.908715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.435 [2024-12-10 22:58:59.908879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.435 [2024-12-10 22:58:59.908888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.435 [2024-12-10 22:58:59.908894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.435 [2024-12-10 22:58:59.908900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.435 [2024-12-10 22:58:59.921253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.435 [2024-12-10 22:58:59.921686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.435 [2024-12-10 22:58:59.921705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.435 [2024-12-10 22:58:59.921713] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.435 [2024-12-10 22:58:59.921886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.435 [2024-12-10 22:58:59.922059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.435 [2024-12-10 22:58:59.922069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.435 [2024-12-10 22:58:59.922076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.435 [2024-12-10 22:58:59.922083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.435 [2024-12-10 22:58:59.934374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.435 [2024-12-10 22:58:59.934735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.435 [2024-12-10 22:58:59.934753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.435 [2024-12-10 22:58:59.934760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.435 [2024-12-10 22:58:59.934933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.435 [2024-12-10 22:58:59.935107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.435 [2024-12-10 22:58:59.935117] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.435 [2024-12-10 22:58:59.935123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.435 [2024-12-10 22:58:59.935130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.435 [2024-12-10 22:58:59.947336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.435 [2024-12-10 22:58:59.947761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.435 [2024-12-10 22:58:59.947800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.435 [2024-12-10 22:58:59.947826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.435 [2024-12-10 22:58:59.948426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.435 [2024-12-10 22:58:59.948633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.435 [2024-12-10 22:58:59.948642] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.435 [2024-12-10 22:58:59.948648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.435 [2024-12-10 22:58:59.948655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.435 [2024-12-10 22:58:59.960264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.435 [2024-12-10 22:58:59.960680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.435 [2024-12-10 22:58:59.960719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.435 [2024-12-10 22:58:59.960745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.435 [2024-12-10 22:58:59.961322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.435 [2024-12-10 22:58:59.961497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.435 [2024-12-10 22:58:59.961505] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.435 [2024-12-10 22:58:59.961512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.435 [2024-12-10 22:58:59.961519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:58:59.973073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:58:59.973428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:58:59.973445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:58:59.973452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:58:59.973616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:58:59.973780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:58:59.973789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:58:59.973795] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:58:59.973801] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:58:59.985975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:58:59.986392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:58:59.986409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:58:59.986416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:58:59.986583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:58:59.986748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:58:59.986757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:58:59.986764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:58:59.986769] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:58:59.998853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:58:59.999271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:58:59.999288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:58:59.999296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:58:59.999458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:58:59.999622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:58:59.999632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:58:59.999638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:58:59.999644] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.012027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.012463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.012483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.012490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.012669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.012848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.012857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.012865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.012871] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.025223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.025651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.025669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.025677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.025850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.026024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.026036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.026043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.026050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.038873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.039348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.039373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.039386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.039588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.039795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.039809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.039819] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.039830] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.052418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.052740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.052763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.052776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.052975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.053200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.053217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.053229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.053239] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.066192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.066583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.066616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.066639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.066920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.067200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.067221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.067237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.067258] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.079325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.079702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.079720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.079729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.079908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.080088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.080099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.080107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.080114] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.092544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.092967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.092986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.436 [2024-12-10 22:59:00.092994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.436 [2024-12-10 22:59:00.093180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.436 [2024-12-10 22:59:00.093360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.436 [2024-12-10 22:59:00.093371] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.436 [2024-12-10 22:59:00.093377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.436 [2024-12-10 22:59:00.093384] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.436 [2024-12-10 22:59:00.105648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.436 [2024-12-10 22:59:00.106071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.436 [2024-12-10 22:59:00.106089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.437 [2024-12-10 22:59:00.106097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.437 [2024-12-10 22:59:00.106283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.437 [2024-12-10 22:59:00.106463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.437 [2024-12-10 22:59:00.106473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.437 [2024-12-10 22:59:00.106480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.437 [2024-12-10 22:59:00.106486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.437 [2024-12-10 22:59:00.118755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.437 [2024-12-10 22:59:00.119192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.437 [2024-12-10 22:59:00.119214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.437 [2024-12-10 22:59:00.119223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.437 [2024-12-10 22:59:00.119402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.437 [2024-12-10 22:59:00.119580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.437 [2024-12-10 22:59:00.119591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.437 [2024-12-10 22:59:00.119598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.437 [2024-12-10 22:59:00.119605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.437 [2024-12-10 22:59:00.131837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.437 [2024-12-10 22:59:00.132268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.437 [2024-12-10 22:59:00.132313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.437 [2024-12-10 22:59:00.132339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.437 [2024-12-10 22:59:00.132896] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.437 [2024-12-10 22:59:00.133072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.437 [2024-12-10 22:59:00.133081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.437 [2024-12-10 22:59:00.133088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.437 [2024-12-10 22:59:00.133094] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.437 [2024-12-10 22:59:00.144886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.437 [2024-12-10 22:59:00.145336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.437 [2024-12-10 22:59:00.145382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.437 [2024-12-10 22:59:00.145405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.437 [2024-12-10 22:59:00.145912] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.437 [2024-12-10 22:59:00.146094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.437 [2024-12-10 22:59:00.146104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.437 [2024-12-10 22:59:00.146111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.437 [2024-12-10 22:59:00.146118] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.437 [2024-12-10 22:59:00.157968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.437 [2024-12-10 22:59:00.158400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.437 [2024-12-10 22:59:00.158418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.437 [2024-12-10 22:59:00.158426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.437 [2024-12-10 22:59:00.158608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.437 [2024-12-10 22:59:00.158787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.437 [2024-12-10 22:59:00.158797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.437 [2024-12-10 22:59:00.158804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.437 [2024-12-10 22:59:00.158810] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.718 [2024-12-10 22:59:00.171096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.718 [2024-12-10 22:59:00.171518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.718 [2024-12-10 22:59:00.171564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.719 [2024-12-10 22:59:00.171587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.719 [2024-12-10 22:59:00.172015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.719 [2024-12-10 22:59:00.172197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.719 [2024-12-10 22:59:00.172223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.719 [2024-12-10 22:59:00.172231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.719 [2024-12-10 22:59:00.172238] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.719 [2024-12-10 22:59:00.184201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.719 [2024-12-10 22:59:00.184547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.719 [2024-12-10 22:59:00.184564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.719 [2024-12-10 22:59:00.184572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.719 [2024-12-10 22:59:00.184745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.719 [2024-12-10 22:59:00.184918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.719 [2024-12-10 22:59:00.184928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.719 [2024-12-10 22:59:00.184935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.719 [2024-12-10 22:59:00.184941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.719 [2024-12-10 22:59:00.197273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.719 [2024-12-10 22:59:00.197607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.719 [2024-12-10 22:59:00.197625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.719 [2024-12-10 22:59:00.197633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.719 [2024-12-10 22:59:00.197806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.719 [2024-12-10 22:59:00.197980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.719 [2024-12-10 22:59:00.197993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.719 [2024-12-10 22:59:00.198000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.719 [2024-12-10 22:59:00.198008] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.719 [2024-12-10 22:59:00.210462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.719 [2024-12-10 22:59:00.210794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.719 [2024-12-10 22:59:00.210812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.719 [2024-12-10 22:59:00.210819] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.719 [2024-12-10 22:59:00.210998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.719 [2024-12-10 22:59:00.211182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.719 [2024-12-10 22:59:00.211193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.719 [2024-12-10 22:59:00.211200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.719 [2024-12-10 22:59:00.211207] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.719 [2024-12-10 22:59:00.223613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:52.719 [2024-12-10 22:59:00.224026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:52.719 [2024-12-10 22:59:00.224044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:52.719 [2024-12-10 22:59:00.224052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:52.719 [2024-12-10 22:59:00.224236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:52.719 [2024-12-10 22:59:00.224416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:52.719 [2024-12-10 22:59:00.224425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:52.719 [2024-12-10 22:59:00.224432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:52.719 [2024-12-10 22:59:00.224438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:52.719 [2024-12-10 22:59:00.236707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.719 [2024-12-10 22:59:00.237069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.719 [2024-12-10 22:59:00.237087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.719 [2024-12-10 22:59:00.237094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.719 [2024-12-10 22:59:00.237278] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.719 [2024-12-10 22:59:00.237458] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.719 [2024-12-10 22:59:00.237468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.719 [2024-12-10 22:59:00.237475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.719 [2024-12-10 22:59:00.237482] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.719 [2024-12-10 22:59:00.249913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.719 [2024-12-10 22:59:00.250345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.719 [2024-12-10 22:59:00.250364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.719 [2024-12-10 22:59:00.250372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.719 [2024-12-10 22:59:00.250549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.719 [2024-12-10 22:59:00.250728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.719 [2024-12-10 22:59:00.250738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.719 [2024-12-10 22:59:00.250745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.719 [2024-12-10 22:59:00.250751] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.719 [2024-12-10 22:59:00.262999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.719 [2024-12-10 22:59:00.263412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.719 [2024-12-10 22:59:00.263429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.719 [2024-12-10 22:59:00.263438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.719 [2024-12-10 22:59:00.263617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.263796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.263807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.263814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.263820] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.276081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.276456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.276474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.276482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.276660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.276839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.276849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.276855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.276862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.289284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.289645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.289667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.289675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.289852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.290031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.290041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.290047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.290054] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.302496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.302933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.302951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.302958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.303136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.303321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.303332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.303339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.303346] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.315596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.316034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.316052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.316060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.316242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.316421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.316432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.316438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.316445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.328702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.329135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.329153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.329166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.329346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.329528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.329538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.329545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.329551] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.341767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.342193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.342240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.342264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.342669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.342850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.342861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.720 [2024-12-10 22:59:00.342868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.720 [2024-12-10 22:59:00.342874] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.720 [2024-12-10 22:59:00.354963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.720 [2024-12-10 22:59:00.355330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.720 [2024-12-10 22:59:00.355349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.720 [2024-12-10 22:59:00.355358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.720 [2024-12-10 22:59:00.355537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.720 [2024-12-10 22:59:00.355716] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.720 [2024-12-10 22:59:00.355726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.721 [2024-12-10 22:59:00.355734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.721 [2024-12-10 22:59:00.355740] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.721 [2024-12-10 22:59:00.368163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.721 [2024-12-10 22:59:00.368502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.721 [2024-12-10 22:59:00.368519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.721 [2024-12-10 22:59:00.368527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.721 [2024-12-10 22:59:00.368705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.721 [2024-12-10 22:59:00.368884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.721 [2024-12-10 22:59:00.368894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.721 [2024-12-10 22:59:00.368904] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.721 [2024-12-10 22:59:00.368911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.721 [2024-12-10 22:59:00.381329] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.721 [2024-12-10 22:59:00.381624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.721 [2024-12-10 22:59:00.381674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.721 [2024-12-10 22:59:00.381698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.721 [2024-12-10 22:59:00.382225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.721 [2024-12-10 22:59:00.382401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.721 [2024-12-10 22:59:00.382412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.721 [2024-12-10 22:59:00.382419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.721 [2024-12-10 22:59:00.382425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.721 [2024-12-10 22:59:00.394422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.721 [2024-12-10 22:59:00.394697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.721 [2024-12-10 22:59:00.394715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.721 [2024-12-10 22:59:00.394722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.721 [2024-12-10 22:59:00.394895] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.721 [2024-12-10 22:59:00.395069] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.721 [2024-12-10 22:59:00.395078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.721 [2024-12-10 22:59:00.395085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.721 [2024-12-10 22:59:00.395092] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.721 [2024-12-10 22:59:00.407494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.721 [2024-12-10 22:59:00.407848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.721 [2024-12-10 22:59:00.407867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.721 [2024-12-10 22:59:00.407875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.721 [2024-12-10 22:59:00.408052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.721 [2024-12-10 22:59:00.408236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.721 [2024-12-10 22:59:00.408248] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.721 [2024-12-10 22:59:00.408254] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.721 [2024-12-10 22:59:00.408261] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:52.721 [2024-12-10 22:59:00.420720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:52.721 [2024-12-10 22:59:00.421138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:52.721 [2024-12-10 22:59:00.421161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:52.721 [2024-12-10 22:59:00.421170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:52.721 [2024-12-10 22:59:00.421354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:52.721 [2024-12-10 22:59:00.421549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:52.721 [2024-12-10 22:59:00.421559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:52.721 [2024-12-10 22:59:00.421565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:52.721 [2024-12-10 22:59:00.421572] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.022 [2024-12-10 22:59:00.433997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.022 [2024-12-10 22:59:00.434414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.022 [2024-12-10 22:59:00.434433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.022 [2024-12-10 22:59:00.434442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.022 [2024-12-10 22:59:00.434626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.022 [2024-12-10 22:59:00.434812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.022 [2024-12-10 22:59:00.434822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.022 [2024-12-10 22:59:00.434829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.022 [2024-12-10 22:59:00.434836] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.022 [2024-12-10 22:59:00.447264] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.022 [2024-12-10 22:59:00.447601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.022 [2024-12-10 22:59:00.447617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.022 [2024-12-10 22:59:00.447626] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.022 [2024-12-10 22:59:00.447804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.022 [2024-12-10 22:59:00.447983] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.022 [2024-12-10 22:59:00.447993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.022 [2024-12-10 22:59:00.448000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.022 [2024-12-10 22:59:00.448007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.022 [2024-12-10 22:59:00.460359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.022 [2024-12-10 22:59:00.460747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.022 [2024-12-10 22:59:00.460766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.022 [2024-12-10 22:59:00.460778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.022 [2024-12-10 22:59:00.460956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.022 [2024-12-10 22:59:00.461137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.022 [2024-12-10 22:59:00.461147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.022 [2024-12-10 22:59:00.461153] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.022 [2024-12-10 22:59:00.461166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.022 [2024-12-10 22:59:00.473423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.022 [2024-12-10 22:59:00.473700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.022 [2024-12-10 22:59:00.473718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.023 [2024-12-10 22:59:00.473726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.023 [2024-12-10 22:59:00.473904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.023 [2024-12-10 22:59:00.474083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.023 [2024-12-10 22:59:00.474094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.023 [2024-12-10 22:59:00.474101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.023 [2024-12-10 22:59:00.474107] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.023 [2024-12-10 22:59:00.486468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.023 [2024-12-10 22:59:00.486867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.023 [2024-12-10 22:59:00.486911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.023 [2024-12-10 22:59:00.486935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.023 [2024-12-10 22:59:00.487475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.023 [2024-12-10 22:59:00.487656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.023 [2024-12-10 22:59:00.487666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.023 [2024-12-10 22:59:00.487673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.023 [2024-12-10 22:59:00.487679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.023 [2024-12-10 22:59:00.499623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.023 [2024-12-10 22:59:00.499975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.023 [2024-12-10 22:59:00.499993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.023 [2024-12-10 22:59:00.500001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.023 [2024-12-10 22:59:00.500184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.023 [2024-12-10 22:59:00.500376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.023 [2024-12-10 22:59:00.500385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.023 [2024-12-10 22:59:00.500392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.023 [2024-12-10 22:59:00.500398] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.023 [2024-12-10 22:59:00.512799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.023 [2024-12-10 22:59:00.513128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.023 [2024-12-10 22:59:00.513146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.023 [2024-12-10 22:59:00.513154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.023 [2024-12-10 22:59:00.513349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.023 [2024-12-10 22:59:00.513523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.023 [2024-12-10 22:59:00.513533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.023 [2024-12-10 22:59:00.513540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.023 [2024-12-10 22:59:00.513547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.023 [2024-12-10 22:59:00.525955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.023 [2024-12-10 22:59:00.526386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.023 [2024-12-10 22:59:00.526404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.023 [2024-12-10 22:59:00.526412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.023 [2024-12-10 22:59:00.526585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.023 [2024-12-10 22:59:00.526758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.023 [2024-12-10 22:59:00.526768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.023 [2024-12-10 22:59:00.526775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.023 [2024-12-10 22:59:00.526781] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.023 [2024-12-10 22:59:00.539053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.023 [2024-12-10 22:59:00.539335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.023 [2024-12-10 22:59:00.539364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.023 [2024-12-10 22:59:00.539372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.023 [2024-12-10 22:59:00.539546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.023 [2024-12-10 22:59:00.539719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.023 [2024-12-10 22:59:00.539729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.023 [2024-12-10 22:59:00.539740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.023 [2024-12-10 22:59:00.539746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.023 [2024-12-10 22:59:00.552186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.023 [2024-12-10 22:59:00.552460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.023 [2024-12-10 22:59:00.552477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.023 [2024-12-10 22:59:00.552485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.023 [2024-12-10 22:59:00.552658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.023 [2024-12-10 22:59:00.552831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.023 [2024-12-10 22:59:00.552841] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.023 [2024-12-10 22:59:00.552848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.023 [2024-12-10 22:59:00.552854] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.023 [2024-12-10 22:59:00.565228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.023 [2024-12-10 22:59:00.565521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.023 [2024-12-10 22:59:00.565538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.023 [2024-12-10 22:59:00.565546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.023 [2024-12-10 22:59:00.565719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.565892] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.565902] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.565909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.565915] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 [2024-12-10 22:59:00.578341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.578660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.578678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.578685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.578858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.579031] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.579041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.579049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.579055] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 [2024-12-10 22:59:00.591458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.591721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.591738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.591746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.591919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.592092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.592101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.592108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.592115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 [2024-12-10 22:59:00.604532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.604867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.604885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.604893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.605065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.605245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.605256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.605262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.605269] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 [2024-12-10 22:59:00.617693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.618044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.618088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.618111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.618709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.619188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.619198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.619205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.619212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 [2024-12-10 22:59:00.630749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.631072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.631089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.631100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.631284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.631463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.631473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.631480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.631486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 [2024-12-10 22:59:00.643788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.644106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.644124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.644132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.644331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.644511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.024 [2024-12-10 22:59:00.644521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.024 [2024-12-10 22:59:00.644528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.024 [2024-12-10 22:59:00.644534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.024 5813.40 IOPS, 22.71 MiB/s [2024-12-10T21:59:00.753Z]
00:27:53.024 [2024-12-10 22:59:00.657204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.024 [2024-12-10 22:59:00.657648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.024 [2024-12-10 22:59:00.657691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.024 [2024-12-10 22:59:00.657715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.024 [2024-12-10 22:59:00.658196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.024 [2024-12-10 22:59:00.658372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.025 [2024-12-10 22:59:00.658382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.025 [2024-12-10 22:59:00.658389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.025 [2024-12-10 22:59:00.658396] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.025 [2024-12-10 22:59:00.670294] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.025 [2024-12-10 22:59:00.670614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.025 [2024-12-10 22:59:00.670632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.025 [2024-12-10 22:59:00.670640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.025 [2024-12-10 22:59:00.670812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.025 [2024-12-10 22:59:00.670990] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.025 [2024-12-10 22:59:00.671000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.025 [2024-12-10 22:59:00.671006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.025 [2024-12-10 22:59:00.671013] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.025 [2024-12-10 22:59:00.683301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.025 [2024-12-10 22:59:00.683731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.025 [2024-12-10 22:59:00.683775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.025 [2024-12-10 22:59:00.683798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.025 [2024-12-10 22:59:00.684310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.025 [2024-12-10 22:59:00.684487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.025 [2024-12-10 22:59:00.684496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.025 [2024-12-10 22:59:00.684503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.025 [2024-12-10 22:59:00.684509] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.025 [2024-12-10 22:59:00.696387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.025 [2024-12-10 22:59:00.696811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.025 [2024-12-10 22:59:00.696829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.025 [2024-12-10 22:59:00.696837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.025 [2024-12-10 22:59:00.697009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.025 [2024-12-10 22:59:00.697189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.025 [2024-12-10 22:59:00.697200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.025 [2024-12-10 22:59:00.697206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.025 [2024-12-10 22:59:00.697213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.025 [2024-12-10 22:59:00.709407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.025 [2024-12-10 22:59:00.709731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.025 [2024-12-10 22:59:00.709748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.025 [2024-12-10 22:59:00.709756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.025 [2024-12-10 22:59:00.709929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.025 [2024-12-10 22:59:00.710102] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.025 [2024-12-10 22:59:00.710113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.025 [2024-12-10 22:59:00.710124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.025 [2024-12-10 22:59:00.710130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.025 [2024-12-10 22:59:00.722237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.025 [2024-12-10 22:59:00.722654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.025 [2024-12-10 22:59:00.722671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.025 [2024-12-10 22:59:00.722678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.025 [2024-12-10 22:59:00.722842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.025 [2024-12-10 22:59:00.723006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.025 [2024-12-10 22:59:00.723015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.025 [2024-12-10 22:59:00.723021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.025 [2024-12-10 22:59:00.723027] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.025 [2024-12-10 22:59:00.735431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.025 [2024-12-10 22:59:00.735761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.025 [2024-12-10 22:59:00.735779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.025 [2024-12-10 22:59:00.735787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.025 [2024-12-10 22:59:00.735965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.330 [2024-12-10 22:59:00.736144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.330 [2024-12-10 22:59:00.736154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.736168] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.736174] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.748545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.748904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.748921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.748929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.749101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.749281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.749292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.749299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.749306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.761663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.762027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.762044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.762052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.762236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.762415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.762425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.762432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.762438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.774716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.775143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.775210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.775234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.775816] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.776414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.776459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.776482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.776502] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.789601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.790095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.790117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.790127] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.790368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.790606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.790619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.790628] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.790636] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.802406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.802823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.802840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.802851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.803015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.803200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.803210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.803217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.803224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.815275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.815669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.815686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.815694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.815857] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.816021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.816031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.816037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.816044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.828232] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.828652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.828696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.828720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.829326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.829503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.829513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.829520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.829526] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.841101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.841518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.841560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.841585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.842184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.842484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.842493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.842500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.842506] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.854035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.331 [2024-12-10 22:59:00.854455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.331 [2024-12-10 22:59:00.854472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.331 [2024-12-10 22:59:00.854479] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.331 [2024-12-10 22:59:00.854643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.331 [2024-12-10 22:59:00.854807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.331 [2024-12-10 22:59:00.854816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.331 [2024-12-10 22:59:00.854823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.331 [2024-12-10 22:59:00.854829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.331 [2024-12-10 22:59:00.866965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.867388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.867406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.867415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.867593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.867773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.867783] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.867790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.867797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.880039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.880414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.880433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.880441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.880618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.880798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.880808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.880820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.880828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.893057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.893485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.893502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.893510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.893683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.893857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.893867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.893873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.893880] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.905972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.906307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.906324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.906332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.906495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.906660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.906670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.906676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.906683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.918796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.919219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.919237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.919244] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.919408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.919572] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.919582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.919588] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.919594] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.931699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.932118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.932134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.932141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.932334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.932508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.932518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.932525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.932532] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.944516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.944926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.944942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.944950] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.945113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.945303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.945313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.945320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.945326] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.957387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.957799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.957816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.957823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.957986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.958150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.958165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.958172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.958178] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.970209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.970623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.970641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.970651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.970815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.970980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.970990] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.970996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.971002] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.983109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.983506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.983523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.332 [2024-12-10 22:59:00.983530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.332 [2024-12-10 22:59:00.983693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.332 [2024-12-10 22:59:00.983857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.332 [2024-12-10 22:59:00.983868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.332 [2024-12-10 22:59:00.983873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.332 [2024-12-10 22:59:00.983880] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.332 [2024-12-10 22:59:00.995948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.332 [2024-12-10 22:59:00.996393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.332 [2024-12-10 22:59:00.996439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.333 [2024-12-10 22:59:00.996463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.333 [2024-12-10 22:59:00.996977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.333 [2024-12-10 22:59:00.997142] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.333 [2024-12-10 22:59:00.997152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.333 [2024-12-10 22:59:00.997164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.333 [2024-12-10 22:59:00.997170] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.333 [2024-12-10 22:59:01.008826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.333 [2024-12-10 22:59:01.009222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-12-10 22:59:01.009240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.333 [2024-12-10 22:59:01.009248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.333 [2024-12-10 22:59:01.009412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.333 [2024-12-10 22:59:01.009579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.333 [2024-12-10 22:59:01.009589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.333 [2024-12-10 22:59:01.009595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.333 [2024-12-10 22:59:01.009602] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.333 [2024-12-10 22:59:01.021914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.333 [2024-12-10 22:59:01.022325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.333 [2024-12-10 22:59:01.022343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.333 [2024-12-10 22:59:01.022352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.333 [2024-12-10 22:59:01.022531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.333 [2024-12-10 22:59:01.022710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.333 [2024-12-10 22:59:01.022721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.333 [2024-12-10 22:59:01.022728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.333 [2024-12-10 22:59:01.022734] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.333 [2024-12-10 22:59:01.034999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.593 [2024-12-10 22:59:01.035411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.593 [2024-12-10 22:59:01.035429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.593 [2024-12-10 22:59:01.035437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.593 [2024-12-10 22:59:01.035614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.593 [2024-12-10 22:59:01.035793] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.593 [2024-12-10 22:59:01.035803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.593 [2024-12-10 22:59:01.035810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.593 [2024-12-10 22:59:01.035816] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.593 [2024-12-10 22:59:01.047868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.593 [2024-12-10 22:59:01.048276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.593 [2024-12-10 22:59:01.048320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.593 [2024-12-10 22:59:01.048344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.593 [2024-12-10 22:59:01.048881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.593 [2024-12-10 22:59:01.049062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.593 [2024-12-10 22:59:01.049073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.593 [2024-12-10 22:59:01.049079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.049089] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.060808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.061230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.061276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.061299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.061883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.062103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.062113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.062120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.062127] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.073640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.074059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.074103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.074126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.074725] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.075222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.075233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.075239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.075246] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.086572] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.086913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.086930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.086938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.087101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.087290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.087300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.087307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.087313] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.099428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.099849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.099894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.099917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.100350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.100525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.100535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.100542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.100548] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.114289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.114722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.114766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.114789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.115345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.115603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.115616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.115627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.115637] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.127447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.127883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.127901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.127909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.128087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.128276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.128287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.128295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.128302] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.140557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.140956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.140974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.140982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.141150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.141346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.141356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.141362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.141368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.153639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.154028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.154046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.154053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.154223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.154389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.154399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.154405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.154411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.166581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.167004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.167048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.167071] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.167486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.167662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.167672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.594 [2024-12-10 22:59:01.167679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.594 [2024-12-10 22:59:01.167686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.594 [2024-12-10 22:59:01.179440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.594 [2024-12-10 22:59:01.179854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.594 [2024-12-10 22:59:01.179870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.594 [2024-12-10 22:59:01.179877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.594 [2024-12-10 22:59:01.180040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.594 [2024-12-10 22:59:01.180211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.594 [2024-12-10 22:59:01.180224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.180230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.180237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.192379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.192800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.192817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.192825] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.192988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.193153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.193170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.193176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.193182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.205184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.205597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.205615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.205622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.205785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.205950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.205960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.205966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.205972] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.218038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.218379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.218396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.218405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.218569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.218733] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.218743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.218750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.218759] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.230989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.231406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.231422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.231430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.231594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.231758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.231767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.231773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.231780] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.244086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.244516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.244534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.244542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.244714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.244888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.244898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.244905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.244911] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.257013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.257435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.257452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.257459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.257623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.257787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.257796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.257802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.257809] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.269832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.270260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.270305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.270327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.270718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.270883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.270893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.270899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.270905] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.595 [2024-12-10 22:59:01.282679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.595 [2024-12-10 22:59:01.283094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.595 [2024-12-10 22:59:01.283111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.595 [2024-12-10 22:59:01.283119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.595 [2024-12-10 22:59:01.283310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.595 [2024-12-10 22:59:01.283484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.595 [2024-12-10 22:59:01.283495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.595 [2024-12-10 22:59:01.283501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.595 [2024-12-10 22:59:01.283507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
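[Editor's note] Every reset attempt above fails at the socket layer with `connect() failed, errno = 111`. On Linux, errno 111 is ECONNREFUSED: the host's connect() to 10.0.0.2:4420 is rejected because nothing is listening while the target process is down. A minimal Python check of that mapping (assumes a Linux errno table; not part of the test scripts):

```python
import errno
import os

# On Linux, errno 111 is ECONNREFUSED: the peer answered with RST because
# no socket is listening on the requested address/port (10.0.0.2:4420 here).
name = errno.errorcode[111]   # symbolic name for errno 111
msg = os.strerror(111)        # human-readable message for errno 111

print(name)  # ECONNREFUSED
print(msg)   # Connection refused
```

The errno value is platform-specific (e.g. ECONNREFUSED is 61 on macOS), which is why logs report the raw number alongside the failing call.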
00:27:53.595 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/bdevperf.sh: line 35: 2528358 Killed "${NVMF_APP[@]}" "$@"
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2529771
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2529771
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2529771 ']'
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:27:53.595 [2024-12-10 22:59:01.295811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:27:53.595 [2024-12-10 22:59:01.296242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.595 [2024-12-10 22:59:01.296264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.595 [2024-12-10 22:59:01.296273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.595 [2024-12-10 22:59:01.296451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.595 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:27:53.596 [2024-12-10 22:59:01.296630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.596 [2024-12-10 22:59:01.296640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.596 [2024-12-10 22:59:01.296647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.596 [2024-12-10 22:59:01.296655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.596 22:59:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:53.596 [2024-12-10 22:59:01.308908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.596 [2024-12-10 22:59:01.309345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.596 [2024-12-10 22:59:01.309362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.596 [2024-12-10 22:59:01.309369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.596 [2024-12-10 22:59:01.309547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.596 [2024-12-10 22:59:01.309724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.596 [2024-12-10 22:59:01.309732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.596 [2024-12-10 22:59:01.309738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.596 [2024-12-10 22:59:01.309745] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.856 [2024-12-10 22:59:01.322004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.322452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.322470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.322478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.322656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.322836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.322845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.322851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.857 [2024-12-10 22:59:01.322857] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.857 [2024-12-10 22:59:01.335067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.857 [2024-12-10 22:59:01.335520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.857 [2024-12-10 22:59:01.335537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.857 [2024-12-10 22:59:01.335548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.857 [2024-12-10 22:59:01.335721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.857 [2024-12-10 22:59:01.335895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.857 [2024-12-10 22:59:01.335903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.857 [2024-12-10 22:59:01.335909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.857 [2024-12-10 22:59:01.335915] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.857 [2024-12-10 22:59:01.346534] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:27:53.857 [2024-12-10 22:59:01.346573] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:27:53.857 [2024-12-10 22:59:01.348118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:53.857 [2024-12-10 22:59:01.348548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:53.857 [2024-12-10 22:59:01.348566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:53.857 [2024-12-10 22:59:01.348573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:53.857 [2024-12-10 22:59:01.348747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:53.857 [2024-12-10 22:59:01.348920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:53.857 [2024-12-10 22:59:01.348928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:53.857 [2024-12-10 22:59:01.348935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:53.857 [2024-12-10 22:59:01.348941] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:53.857 [2024-12-10 22:59:01.361196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.361605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.361621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.361628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.361802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.361976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.361984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.361990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.857 [2024-12-10 22:59:01.361997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.857 [2024-12-10 22:59:01.374290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.374732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.374748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.374758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.374930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.375104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.375112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.375119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.857 [2024-12-10 22:59:01.375125] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.857 [2024-12-10 22:59:01.387421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.387865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.387883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.387891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.388069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.388253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.388263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.388269] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.857 [2024-12-10 22:59:01.388276] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.857 [2024-12-10 22:59:01.400520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.400862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.400878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.400885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.401063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.401250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.401258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.401265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.857 [2024-12-10 22:59:01.401272] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.857 [2024-12-10 22:59:01.413678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.414032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.414048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.414057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.414255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.414438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.414448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.414454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.857 [2024-12-10 22:59:01.414461] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.857 [2024-12-10 22:59:01.426643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.857 [2024-12-10 22:59:01.427006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.857 [2024-12-10 22:59:01.427022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.857 [2024-12-10 22:59:01.427030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.857 [2024-12-10 22:59:01.427211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.857 [2024-12-10 22:59:01.427387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.857 [2024-12-10 22:59:01.427395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.857 [2024-12-10 22:59:01.427401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.427408] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.428566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.858 [2024-12-10 22:59:01.439645] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.440007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.440026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.440034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.440233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.440413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.440423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.440429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.440437] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.452741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.453177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.453196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.453203] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.453376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.453551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.453564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.453572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.453578] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.465830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.466236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.466254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.466261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.466435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.466609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.466618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.466624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.466630] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:53.858 [2024-12-10 22:59:01.469171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.858 [2024-12-10 22:59:01.469198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:53.858 [2024-12-10 22:59:01.469206] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.858 [2024-12-10 22:59:01.469211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:27:53.858 [2024-12-10 22:59:01.469217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.858 [2024-12-10 22:59:01.470482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.858 [2024-12-10 22:59:01.470588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.858 [2024-12-10 22:59:01.470588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.858 [2024-12-10 22:59:01.478957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.479443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.479463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.479472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.479652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.479832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.479840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.479848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.479855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.492116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.492579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.492600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.492618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.492797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.492976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.492986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.492993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.493000] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.505271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.505723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.505744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.505753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.505932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.506113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.506121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.506128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.506135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.518399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.518838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.518858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.518867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.519047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.519232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.519242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.519249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.519257] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.531505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.531960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.531981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.531989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.532173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.532360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.532368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.532375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.532382] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.544633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.545049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.545067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.858 [2024-12-10 22:59:01.545075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.858 [2024-12-10 22:59:01.545257] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.858 [2024-12-10 22:59:01.545436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.858 [2024-12-10 22:59:01.545445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.858 [2024-12-10 22:59:01.545452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.858 [2024-12-10 22:59:01.545458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.858 [2024-12-10 22:59:01.557709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.858 [2024-12-10 22:59:01.558120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.858 [2024-12-10 22:59:01.558138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.859 [2024-12-10 22:59:01.558146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.859 [2024-12-10 22:59:01.558328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.859 [2024-12-10 22:59:01.558507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.859 [2024-12-10 22:59:01.558516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.859 [2024-12-10 22:59:01.558522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.859 [2024-12-10 22:59:01.558529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:53.859 [2024-12-10 22:59:01.570792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:53.859 [2024-12-10 22:59:01.571136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:53.859 [2024-12-10 22:59:01.571152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:53.859 [2024-12-10 22:59:01.571165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:53.859 [2024-12-10 22:59:01.571344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:53.859 [2024-12-10 22:59:01.571524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:53.859 [2024-12-10 22:59:01.571532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:53.859 [2024-12-10 22:59:01.571542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:53.859 [2024-12-10 22:59:01.571549] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.119 [2024-12-10 22:59:01.583948] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.119 [2024-12-10 22:59:01.584305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-12-10 22:59:01.584324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.119 [2024-12-10 22:59:01.584332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.119 [2024-12-10 22:59:01.584511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.119 [2024-12-10 22:59:01.584692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.119 [2024-12-10 22:59:01.584701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.119 [2024-12-10 22:59:01.584708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.119 [2024-12-10 22:59:01.584715] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.119 [2024-12-10 22:59:01.597133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.119 [2024-12-10 22:59:01.597549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-12-10 22:59:01.597566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.119 [2024-12-10 22:59:01.597574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.119 [2024-12-10 22:59:01.597751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.119 [2024-12-10 22:59:01.597930] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.119 [2024-12-10 22:59:01.597939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.119 [2024-12-10 22:59:01.597945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.119 [2024-12-10 22:59:01.597951] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.119 [2024-12-10 22:59:01.610187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.119 [2024-12-10 22:59:01.610647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-12-10 22:59:01.610664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.119 [2024-12-10 22:59:01.610672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.119 [2024-12-10 22:59:01.610850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.119 [2024-12-10 22:59:01.611029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.119 [2024-12-10 22:59:01.611037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.119 [2024-12-10 22:59:01.611044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.119 [2024-12-10 22:59:01.611050] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.119 [2024-12-10 22:59:01.623300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.119 [2024-12-10 22:59:01.623594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.119 [2024-12-10 22:59:01.623611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.119 [2024-12-10 22:59:01.623619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.119 [2024-12-10 22:59:01.623796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.119 [2024-12-10 22:59:01.623975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.119 [2024-12-10 22:59:01.623983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.623990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.623996] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.636423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.636843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.636860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.636867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.637046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.637228] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.637237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.637243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.637250] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.649506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.649904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.649921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.649929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.650108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.650291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.650301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.650308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.650314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 4844.50 IOPS, 18.92 MiB/s [2024-12-10T21:59:01.849Z] [2024-12-10 22:59:01.662700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.663119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.663136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.663148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.663330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.663509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.663518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.663525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.663531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.675788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.676202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.676219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.676227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.676405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.676584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.676593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.676600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.676606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.688876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.689279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.689298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.689305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.689486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.689664] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.689673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.689679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.689686] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.701939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.702304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.702321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.702328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.702507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.702688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.702697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.702703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.702710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.715131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.715550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.715568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.715576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.715754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.715933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.715941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.715948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.715954] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.728222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.120 [2024-12-10 22:59:01.728582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.120 [2024-12-10 22:59:01.728599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.120 [2024-12-10 22:59:01.728606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.120 [2024-12-10 22:59:01.728784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.120 [2024-12-10 22:59:01.728963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.120 [2024-12-10 22:59:01.728971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.120 [2024-12-10 22:59:01.728978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.120 [2024-12-10 22:59:01.728984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.120 [2024-12-10 22:59:01.741402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.120 [2024-12-10 22:59:01.741882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.120 [2024-12-10 22:59:01.741899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.120 [2024-12-10 22:59:01.741906] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.120 [2024-12-10 22:59:01.742084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.120 [2024-12-10 22:59:01.742268] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.120 [2024-12-10 22:59:01.742277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.120 [2024-12-10 22:59:01.742287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.120 [2024-12-10 22:59:01.742293] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.120 [2024-12-10 22:59:01.754575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.120 [2024-12-10 22:59:01.754971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.120 [2024-12-10 22:59:01.754987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.120 [2024-12-10 22:59:01.754995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.120 [2024-12-10 22:59:01.755178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.120 [2024-12-10 22:59:01.755357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.120 [2024-12-10 22:59:01.755365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.755372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.755378] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.121 [2024-12-10 22:59:01.767775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.121 [2024-12-10 22:59:01.768128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.121 [2024-12-10 22:59:01.768145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.121 [2024-12-10 22:59:01.768152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.121 [2024-12-10 22:59:01.768337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.121 [2024-12-10 22:59:01.768516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.121 [2024-12-10 22:59:01.768526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.768534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.768540] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.121 [2024-12-10 22:59:01.780958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.121 [2024-12-10 22:59:01.781348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.121 [2024-12-10 22:59:01.781365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.121 [2024-12-10 22:59:01.781372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.121 [2024-12-10 22:59:01.781550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.121 [2024-12-10 22:59:01.781729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.121 [2024-12-10 22:59:01.781737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.781744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.781750] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.121 [2024-12-10 22:59:01.794161] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.121 [2024-12-10 22:59:01.794456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.121 [2024-12-10 22:59:01.794473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.121 [2024-12-10 22:59:01.794480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.121 [2024-12-10 22:59:01.794658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.121 [2024-12-10 22:59:01.794837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.121 [2024-12-10 22:59:01.794846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.794852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.794858] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.121 [2024-12-10 22:59:01.807288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.121 [2024-12-10 22:59:01.807583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.121 [2024-12-10 22:59:01.807600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.121 [2024-12-10 22:59:01.807607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.121 [2024-12-10 22:59:01.807785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.121 [2024-12-10 22:59:01.807963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.121 [2024-12-10 22:59:01.807971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.807978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.807984] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.121 [2024-12-10 22:59:01.820416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.121 [2024-12-10 22:59:01.820715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.121 [2024-12-10 22:59:01.820731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.121 [2024-12-10 22:59:01.820739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.121 [2024-12-10 22:59:01.820916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.121 [2024-12-10 22:59:01.821095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.121 [2024-12-10 22:59:01.821103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.821109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.821115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.121 [2024-12-10 22:59:01.833513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.121 [2024-12-10 22:59:01.833883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.121 [2024-12-10 22:59:01.833900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.121 [2024-12-10 22:59:01.833911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.121 [2024-12-10 22:59:01.834089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.121 [2024-12-10 22:59:01.834274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.121 [2024-12-10 22:59:01.834282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.121 [2024-12-10 22:59:01.834289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.121 [2024-12-10 22:59:01.834294] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.846710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.847112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.847128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.847135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.847319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.847498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.847507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.847514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.847520] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.859780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.860182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.860199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.860206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.860384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.860563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.860572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.860579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.860585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.872992] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.873386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.873403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.873411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.873588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.873770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.873779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.873785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.873792] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.886217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.886579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.886596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.886603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.886781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.886959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.886967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.886974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.886980] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.899397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.899783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.899802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.899810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.900031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.900215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.900224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.900231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.900237] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.912496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.912923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.912940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.912947] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.913125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.913309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.913318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.913329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.913335] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.925588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.382 [2024-12-10 22:59:01.926003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.382 [2024-12-10 22:59:01.926020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.382 [2024-12-10 22:59:01.926028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.382 [2024-12-10 22:59:01.926210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.382 [2024-12-10 22:59:01.926389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.382 [2024-12-10 22:59:01.926398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.382 [2024-12-10 22:59:01.926405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.382 [2024-12-10 22:59:01.926411] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.382 [2024-12-10 22:59:01.938677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:01.939041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:01.939058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:01.939065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:01.939247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:01.939425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:01.939433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:01.939440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:01.939446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:01.951872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:01.952243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:01.952262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:01.952269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:01.952447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:01.952626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:01.952634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:01.952641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:01.952647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:01.965063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:01.965407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:01.965423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:01.965430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:01.965608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:01.965786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:01.965794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:01.965801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:01.965807] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:01.978240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:01.978672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:01.978689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:01.978696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:01.978874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:01.979053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:01.979062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:01.979068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:01.979074] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:01.991322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:01.991679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:01.991695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:01.991703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:01.991880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:01.992059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:01.992067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:01.992073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:01.992080] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:02.004506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:02.004844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:02.004861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:02.004871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:02.005049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:02.005234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:02.005244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:02.005250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:02.005256] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:02.017697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:02.018039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:02.018056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:02.018063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:02.018245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:02.018425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:02.018433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:02.018440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:02.018446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:02.030883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:02.031245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:02.031263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:02.031270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:02.031447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:02.031627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:02.031636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:02.031642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:02.031648] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:02.044066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:27:54.383 [2024-12-10 22:59:02.044365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:54.383 [2024-12-10 22:59:02.044383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420
00:27:54.383 [2024-12-10 22:59:02.044391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set
00:27:54.383 [2024-12-10 22:59:02.044568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor
00:27:54.383 [2024-12-10 22:59:02.044750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:27:54.383 [2024-12-10 22:59:02.044759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:27:54.383 [2024-12-10 22:59:02.044766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:27:54.383 [2024-12-10 22:59:02.044772] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:27:54.383 [2024-12-10 22:59:02.057197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.383 [2024-12-10 22:59:02.057553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.383 [2024-12-10 22:59:02.057571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.383 [2024-12-10 22:59:02.057578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.383 [2024-12-10 22:59:02.057756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.383 [2024-12-10 22:59:02.057935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.383 [2024-12-10 22:59:02.057945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.383 [2024-12-10 22:59:02.057952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.383 [2024-12-10 22:59:02.057958] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.383 [2024-12-10 22:59:02.070375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.383 [2024-12-10 22:59:02.070733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.383 [2024-12-10 22:59:02.070749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.384 [2024-12-10 22:59:02.070757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.384 [2024-12-10 22:59:02.070934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.384 [2024-12-10 22:59:02.071113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.384 [2024-12-10 22:59:02.071122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.384 [2024-12-10 22:59:02.071128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.384 [2024-12-10 22:59:02.071134] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.384 [2024-12-10 22:59:02.083553] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.384 [2024-12-10 22:59:02.083995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.384 [2024-12-10 22:59:02.084011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.384 [2024-12-10 22:59:02.084019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.384 [2024-12-10 22:59:02.084201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.384 [2024-12-10 22:59:02.084380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.384 [2024-12-10 22:59:02.084388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.384 [2024-12-10 22:59:02.084395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.384 [2024-12-10 22:59:02.084405] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.384 [2024-12-10 22:59:02.096643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.384 [2024-12-10 22:59:02.097083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.384 [2024-12-10 22:59:02.097100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.384 [2024-12-10 22:59:02.097108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.384 [2024-12-10 22:59:02.097290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.384 [2024-12-10 22:59:02.097469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.384 [2024-12-10 22:59:02.097478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.384 [2024-12-10 22:59:02.097487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.384 [2024-12-10 22:59:02.097495] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.644 [2024-12-10 22:59:02.109735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.644 [2024-12-10 22:59:02.110140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.644 [2024-12-10 22:59:02.110162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.644 [2024-12-10 22:59:02.110170] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.644 [2024-12-10 22:59:02.110348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.644 [2024-12-10 22:59:02.110527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.644 [2024-12-10 22:59:02.110536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.644 [2024-12-10 22:59:02.110542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.644 [2024-12-10 22:59:02.110548] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.644 [2024-12-10 22:59:02.122787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.644 [2024-12-10 22:59:02.123223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.644 [2024-12-10 22:59:02.123241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.644 [2024-12-10 22:59:02.123248] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.644 [2024-12-10 22:59:02.123426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.123606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.123615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.123622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.123629] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 [2024-12-10 22:59:02.135873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.136299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.136317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.136324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.136502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.136682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.136691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.136697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.136703] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 [2024-12-10 22:59:02.148958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.149308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.149326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.149333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.149510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.149689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.149698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.149704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.149710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 [2024-12-10 22:59:02.162123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.162556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.162574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.162581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.162758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.162937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.162946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.162953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.162959] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 [2024-12-10 22:59:02.175210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.175585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.175602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.175613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.175791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.175970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.175978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.175985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.175991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.645 [2024-12-10 22:59:02.188412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.188768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.188785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.188792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.188970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.189148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.189162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.189169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.189176] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 [2024-12-10 22:59:02.201621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.202030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.202047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.202055] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.202237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.202416] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.202425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.202431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.202438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 [2024-12-10 22:59:02.214697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.214991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.215008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.215019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.215203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.215385] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.215394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.215400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.215406] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.645 [2024-12-10 22:59:02.225446] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:54.645 [2024-12-10 22:59:02.227811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.645 [2024-12-10 22:59:02.228104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.645 [2024-12-10 22:59:02.228121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.645 [2024-12-10 22:59:02.228128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.645 [2024-12-10 22:59:02.228318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.645 [2024-12-10 22:59:02.228498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.645 [2024-12-10 22:59:02.228507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.645 [2024-12-10 22:59:02.228513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.645 [2024-12-10 22:59:02.228520] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.645 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.646 [2024-12-10 22:59:02.240941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.646 [2024-12-10 22:59:02.241375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.646 [2024-12-10 22:59:02.241392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.646 [2024-12-10 22:59:02.241399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.646 [2024-12-10 22:59:02.241578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.646 [2024-12-10 22:59:02.241757] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.646 [2024-12-10 22:59:02.241765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.646 [2024-12-10 22:59:02.241775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.646 [2024-12-10 22:59:02.241782] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.646 [2024-12-10 22:59:02.254053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.646 [2024-12-10 22:59:02.254424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.646 [2024-12-10 22:59:02.254441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.646 [2024-12-10 22:59:02.254448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.646 [2024-12-10 22:59:02.254627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.646 [2024-12-10 22:59:02.254806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.646 [2024-12-10 22:59:02.254815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.646 [2024-12-10 22:59:02.254821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.646 [2024-12-10 22:59:02.254828] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.646 Malloc0 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.646 [2024-12-10 22:59:02.267241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.646 [2024-12-10 22:59:02.267677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.646 [2024-12-10 22:59:02.267694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.646 [2024-12-10 22:59:02.267701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.646 [2024-12-10 22:59:02.267879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.646 [2024-12-10 22:59:02.268058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.646 [2024-12-10 22:59:02.268066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:27:54.646 [2024-12-10 22:59:02.268073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.646 [2024-12-10 22:59:02.268079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.646 [2024-12-10 22:59:02.280323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:54.646 [2024-12-10 22:59:02.280772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:54.646 [2024-12-10 22:59:02.280789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ae1a0 with addr=10.0.0.2, port=4420 00:27:54.646 [2024-12-10 22:59:02.280796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7ae1a0 is same with the state(6) to be set 00:27:54.646 [2024-12-10 22:59:02.280974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7ae1a0 (9): Bad file descriptor 00:27:54.646 [2024-12-10 22:59:02.281151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:27:54.646 [2024-12-10 22:59:02.281165] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:27:54.646 [2024-12-10 22:59:02.281172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:27:54.646 [2024-12-10 22:59:02.281179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:27:54.646 [2024-12-10 22:59:02.287297] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.646 22:59:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2528839 00:27:54.646 [2024-12-10 22:59:02.293402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:27:54.646 [2024-12-10 22:59:02.320153] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:27:56.151 4676.29 IOPS, 18.27 MiB/s [2024-12-10T21:59:04.816Z] 5495.38 IOPS, 21.47 MiB/s [2024-12-10T21:59:05.751Z] 6114.89 IOPS, 23.89 MiB/s [2024-12-10T21:59:06.686Z] 6617.50 IOPS, 25.85 MiB/s [2024-12-10T21:59:08.062Z] 7012.55 IOPS, 27.39 MiB/s [2024-12-10T21:59:08.998Z] 7364.08 IOPS, 28.77 MiB/s [2024-12-10T21:59:09.934Z] 7649.92 IOPS, 29.88 MiB/s [2024-12-10T21:59:10.871Z] 7901.36 IOPS, 30.86 MiB/s [2024-12-10T21:59:10.871Z] 8105.53 IOPS, 31.66 MiB/s 00:28:03.142 Latency(us) 00:28:03.142 [2024-12-10T21:59:10.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:03.142 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:03.142 Verification LBA range: start 0x0 length 0x4000 00:28:03.142 Nvme1n1 : 15.01 8107.68 31.67 12694.70 0.00 6133.12 438.09 25758.50 00:28:03.142 [2024-12-10T21:59:10.871Z] =================================================================================================================== 00:28:03.142 
[2024-12-10T21:59:10.871Z] Total : 8107.68 31.67 12694.70 0.00 6133.12 438.09 25758.50 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:03.142 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:03.401 rmmod nvme_tcp 00:28:03.401 rmmod nvme_fabrics 00:28:03.401 rmmod nvme_keyring 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2529771 ']' 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2529771 
00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2529771 ']' 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2529771 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2529771 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2529771' 00:28:03.401 killing process with pid 2529771 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 2529771 00:28:03.401 22:59:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2529771 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:03.660 22:59:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:05.565 00:28:05.565 real 0m26.627s 00:28:05.565 user 1m2.877s 00:28:05.565 sys 0m6.742s 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:05.565 ************************************ 00:28:05.565 END TEST nvmf_bdevperf 00:28:05.565 ************************************ 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:05.565 22:59:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.825 ************************************ 00:28:05.825 START TEST nvmf_target_disconnect 00:28:05.825 ************************************ 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:05.825 * Looking for test storage... 
00:28:05.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:28:05.825 22:59:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.825 
--rc genhtml_branch_coverage=1 00:28:05.825 --rc genhtml_function_coverage=1 00:28:05.825 --rc genhtml_legend=1 00:28:05.825 --rc geninfo_all_blocks=1 00:28:05.825 --rc geninfo_unexecuted_blocks=1 00:28:05.825 00:28:05.825 ' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.825 --rc genhtml_branch_coverage=1 00:28:05.825 --rc genhtml_function_coverage=1 00:28:05.825 --rc genhtml_legend=1 00:28:05.825 --rc geninfo_all_blocks=1 00:28:05.825 --rc geninfo_unexecuted_blocks=1 00:28:05.825 00:28:05.825 ' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.825 --rc genhtml_branch_coverage=1 00:28:05.825 --rc genhtml_function_coverage=1 00:28:05.825 --rc genhtml_legend=1 00:28:05.825 --rc geninfo_all_blocks=1 00:28:05.825 --rc geninfo_unexecuted_blocks=1 00:28:05.825 00:28:05.825 ' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:05.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:05.825 --rc genhtml_branch_coverage=1 00:28:05.825 --rc genhtml_function_coverage=1 00:28:05.825 --rc genhtml_legend=1 00:28:05.825 --rc geninfo_all_blocks=1 00:28:05.825 --rc geninfo_unexecuted_blocks=1 00:28:05.825 00:28:05.825 ' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:28:05.825 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:05.826 22:59:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:05.826 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/app/fio/nvme 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:28:05.826 22:59:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:12.396 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:12.396 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:28:12.396 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:12.396 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:12.396 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:12.396 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:28:12.397 
22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:12.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:12.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:12.397 Found net devices under 0000:86:00.0: cvl_0_0 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:12.397 Found net devices under 0000:86:00.1: cvl_0_1 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:12.397 22:59:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:12.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:12.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:28:12.397 00:28:12.397 --- 10.0.0.2 ping statistics --- 00:28:12.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.397 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:12.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:12.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:28:12.397 00:28:12.397 --- 10.0.0.1 ping statistics --- 00:28:12.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:12.397 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:12.397 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:12.398 22:59:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 ************************************ 00:28:12.398 START TEST nvmf_target_disconnect_tc1 00:28:12.398 ************************************ 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 
-- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect ]] 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:12.398 [2024-12-10 22:59:19.527199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:12.398 [2024-12-10 22:59:19.527315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of 
tqpair=0x1764ac0 with addr=10.0.0.2, port=4420 00:28:12.398 [2024-12-10 22:59:19.527371] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:12.398 [2024-12-10 22:59:19.527398] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:12.398 [2024-12-10 22:59:19.527417] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:28:12.398 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:12.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect: errors occurred 00:28:12.398 Initializing NVMe Controllers 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:12.398 00:28:12.398 real 0m0.127s 00:28:12.398 user 0m0.057s 00:28:12.398 sys 0m0.070s 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 ************************************ 00:28:12.398 END TEST nvmf_target_disconnect_tc1 00:28:12.398 ************************************ 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:12.398 22:59:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 ************************************ 00:28:12.398 START TEST nvmf_target_disconnect_tc2 00:28:12.398 ************************************ 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2534939 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2534939 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2534939 ']' 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 [2024-12-10 22:59:19.675686] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:28:12.398 [2024-12-10 22:59:19.675730] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:12.398 [2024-12-10 22:59:19.756336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:12.398 [2024-12-10 22:59:19.797639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:12.398 [2024-12-10 22:59:19.797675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:12.398 [2024-12-10 22:59:19.797682] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:12.398 [2024-12-10 22:59:19.797688] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:12.398 [2024-12-10 22:59:19.797693] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:12.398 [2024-12-10 22:59:19.799283] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:28:12.398 [2024-12-10 22:59:19.799407] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:28:12.398 [2024-12-10 22:59:19.799520] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:12.398 [2024-12-10 22:59:19.799522] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 Malloc0 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.398 22:59:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.398 [2024-12-10 22:59:19.966049] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:12.398 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.399 22:59:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.399 22:59:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.399 [2024-12-10 22:59:19.998311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2534964 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:12.399 22:59:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:14.304 22:59:22 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2534939 00:28:14.304 22:59:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Write completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Write completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Write completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.304 Read completed with error (sct=0, sc=8) 00:28:14.304 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 
Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 [2024-12-10 22:59:22.029981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 
00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Write completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 Read completed with error (sct=0, sc=8) 00:28:14.305 starting I/O failed 00:28:14.305 
[2024-12-10 22:59:22.030201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:14.305 [2024-12-10 22:59:22.030418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.305 [2024-12-10 22:59:22.030439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.305 qpair failed and we were unable to recover it. 00:28:14.305 [2024-12-10 22:59:22.030547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.305 [2024-12-10 22:59:22.030558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.305 qpair failed and we were unable to recover it. 00:28:14.305 [2024-12-10 22:59:22.030659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.305 [2024-12-10 22:59:22.030669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.305 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.030800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.030810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.030959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.030969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 
00:28:14.586 [2024-12-10 22:59:22.031050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.031183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.031327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.031476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.031646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 
00:28:14.586 [2024-12-10 22:59:22.031793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.031950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.031982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 00:28:14.586 [2024-12-10 22:59:22.032097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.586 [2024-12-10 22:59:22.032129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.586 qpair failed and we were unable to recover it. 
00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Write completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Write completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Write completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Write completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 
Write completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.586 Read completed with error (sct=0, sc=8) 00:28:14.586 starting I/O failed 00:28:14.587 Write completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 Write completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 Write completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 Write completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 Read completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 Write completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 Read completed with error (sct=0, sc=8) 00:28:14.587 starting I/O failed 00:28:14.587 [2024-12-10 22:59:22.032555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:14.587 [2024-12-10 22:59:22.032708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.032726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.032809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.032820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.032968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.032978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.033063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.033525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.033935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.033945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.034084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.034525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.034904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.034934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.035058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.035089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.035270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.035281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.035338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.035349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.035440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.035451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.035568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.035599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.035783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.035815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.036007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.036038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.036231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.036263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.036381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.036415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.036531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.036562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.036826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.036858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 
00:28:14.587 [2024-12-10 22:59:22.036967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.036998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.037113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.037145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.037274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.587 [2024-12-10 22:59:22.037306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.587 qpair failed and we were unable to recover it. 00:28:14.587 [2024-12-10 22:59:22.037473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.037503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.037640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.037671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.037781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.037812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.037983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.038021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.038128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.038172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.038345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.038377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.038490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.038521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.038710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.038741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.038858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.038890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.039066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.039096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.039215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.039248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.039505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.039536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.039709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.039740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.039910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.039941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.040061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.040093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.040207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.040240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.040430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.040463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.040579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.040609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.040712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.040744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.040877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.040910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.041081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.041222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.041360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.041517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.041656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.041784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.041947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.041978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.042097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.042129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.042275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.042308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.042475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.042507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.042636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.042673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.042778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.042811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.042990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.043022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.043193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.043226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.043344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.043376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.043497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.043528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.043698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.043733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 
00:28:14.588 [2024-12-10 22:59:22.043846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.043878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.043986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.044017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.044124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.588 [2024-12-10 22:59:22.044155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.588 qpair failed and we were unable to recover it. 00:28:14.588 [2024-12-10 22:59:22.044350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.044383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.044482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.044513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.044615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.044646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.044845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.044880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.044997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.045029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.045221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.045254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.045428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.045460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.045566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.045598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.045763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.045794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.045984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.046117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.046286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.046439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.046564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.046768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.046923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.046952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.047147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.047186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.047296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.047324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.047432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.047460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.047624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.047652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.047746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.047775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.047892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.047921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.048035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.048064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.048247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.048276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.048394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.048423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.048537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.048566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.048663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.048692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.048792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.048821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.048983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.049105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.049260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.049453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.049603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.049757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.049913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.049943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.050119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.050188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.050295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.050324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 
00:28:14.589 [2024-12-10 22:59:22.050514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.050542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.050664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.050692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.589 [2024-12-10 22:59:22.050867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.589 [2024-12-10 22:59:22.050899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.589 qpair failed and we were unable to recover it. 00:28:14.590 [2024-12-10 22:59:22.051029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.590 [2024-12-10 22:59:22.051061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.590 qpair failed and we were unable to recover it. 00:28:14.590 [2024-12-10 22:59:22.051179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.590 [2024-12-10 22:59:22.051213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.590 qpair failed and we were unable to recover it. 
00:28:14.590 [2024-12-10 22:59:22.051348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.051380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.051505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.051533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.051700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.051728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.051908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.051939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.052103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.052132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.052256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.052286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.052389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.052419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.052591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.052623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.052733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.052764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.052881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.052914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.053049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.053081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.053203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.053233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.053400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.053431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.053593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.053621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.053802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.053831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.054030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.054059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.054222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.054258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.054368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.054396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.054599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.054628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.054751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.054779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.054880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.054909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.055075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.055105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.055293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.055326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.055430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.055461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.055589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.055622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.055793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.055824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.056111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.056143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.056337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.056366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.056491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.056536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.056658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.056690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.056803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.056836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.056934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.056967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.057132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.590 [2024-12-10 22:59:22.057174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.590 qpair failed and we were unable to recover it.
00:28:14.590 [2024-12-10 22:59:22.057440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.057473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.057575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.057606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.057738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.057766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.057892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.057921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.058963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.058990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.059092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.059120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.059310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.059341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.059438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.059468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.059590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.059619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.059727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.059757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.059891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.059919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.060028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.060058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.060233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.060263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.060433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.060462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.060627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.060655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.060770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.060800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.060969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.060997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.061125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.061270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.061472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.061606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.061759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.061880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.061981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.062010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.062111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.062142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.062323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.062351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.062513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.062541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.062656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.062684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.062851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.062880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.063013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.063042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.063149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.591 [2024-12-10 22:59:22.063188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.591 qpair failed and we were unable to recover it.
00:28:14.591 [2024-12-10 22:59:22.063288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.063316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.063502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.063531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.063651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.063680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.063799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.063827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.064923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.064953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.065145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.065292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.065427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.065561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.065689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.065890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.065998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.066199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.066332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.066472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.066678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.066812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.066944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.066972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.067076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.592 [2024-12-10 22:59:22.067102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.592 qpair failed and we were unable to recover it.
00:28:14.592 [2024-12-10 22:59:22.067207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.592 [2024-12-10 22:59:22.067235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.592 qpair failed and we were unable to recover it. 00:28:14.592 [2024-12-10 22:59:22.067338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.592 [2024-12-10 22:59:22.067363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.592 qpair failed and we were unable to recover it. 00:28:14.592 [2024-12-10 22:59:22.067532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.592 [2024-12-10 22:59:22.067558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.592 qpair failed and we were unable to recover it. 
00:28:14.592 Read completed with error (sct=0, sc=8) 00:28:14.592 starting I/O failed 00:28:14.592 Write completed with error (sct=0, sc=8) 00:28:14.592 starting I/O failed 00:28:14.592 [2024-12-10 22:59:22.068207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:14.592 [2024-12-10 22:59:22.068387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.592 [2024-12-10 22:59:22.068452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.592 qpair failed and we were unable to recover it. 
00:28:14.595 [2024-12-10 22:59:22.082889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.082920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.083023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.083055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.083233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.083268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.083373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.083402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.083512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.083543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 
00:28:14.595 [2024-12-10 22:59:22.083663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.083693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.083795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.083829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.083997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.084027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.084193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.084229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.084428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.084460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 
00:28:14.595 [2024-12-10 22:59:22.084663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.084694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.084809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.084841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.085020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.085052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.085176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.085210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.085382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.085413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 
00:28:14.595 [2024-12-10 22:59:22.085578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.085611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.085734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.085766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.085872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.085904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.086012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.086043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.086231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.086263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 
00:28:14.595 [2024-12-10 22:59:22.086372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.086404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.086605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.086634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.595 qpair failed and we were unable to recover it. 00:28:14.595 [2024-12-10 22:59:22.086792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.595 [2024-12-10 22:59:22.086821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.087008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.087038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.087146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.087186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.087309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.087337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.087451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.087480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.087640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.087669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.087843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.087873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.087991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.088019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.088238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.088268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.088365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.088395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.088578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.088607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.088701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.088731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.088891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.088919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.089014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.089043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.089139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.089247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.089355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.089384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.089493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.089521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.089625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.089653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.089850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.089879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.090050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.090078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.090257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.090287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.090380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.090410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.090583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.090612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.090847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.090877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.091075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.091103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.091264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.091294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.091469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.091498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.091607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.091636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.091750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.091779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.091899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.091929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.092037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.092066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.092178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.092208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.092308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.092337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.092432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.092461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.092620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.092648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.092808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.092837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.093024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.093052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.093237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.093265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 
00:28:14.596 [2024-12-10 22:59:22.093442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.093471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.093633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.596 [2024-12-10 22:59:22.093661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.596 qpair failed and we were unable to recover it. 00:28:14.596 [2024-12-10 22:59:22.093795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.093824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.093920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.093948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.094116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.094145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 
00:28:14.597 [2024-12-10 22:59:22.094396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.094479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 
00:28:14.597 [... previous two messages repeated verbatim for tqpair=0x7fd0f8000b90 from 22:59:22.094686 through 22:59:22.099872; only the per-attempt timestamps differ ...] 
00:28:14.597 [2024-12-10 22:59:22.099992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.100024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.100127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.100170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.100364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.100397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.100593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.100625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.100727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.100759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 
00:28:14.597 [2024-12-10 22:59:22.100926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.597 [2024-12-10 22:59:22.100958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.597 qpair failed and we were unable to recover it. 00:28:14.597 [2024-12-10 22:59:22.101070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.101101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.101295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.101327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.101432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.101464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.101637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.101668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.101776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.101808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.101911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.101943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.102048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.102080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.102185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.102219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.102328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.102360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.102531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.102562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.102752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.102783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.102956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.102988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.103192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.103225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.103414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.103445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.103629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.103661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.103788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.103820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.104011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.104043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.104222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.104254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.104355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.104386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.104624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.104655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.104769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.104800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.104900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.104931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.105100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.105132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.105246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.105277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.105479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.105511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.105627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.105660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.105781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.105813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.105943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.105975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.106082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.106114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.106231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.106272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.106441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.106472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.106586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.106618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.106787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.106818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.106986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.107017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 
00:28:14.598 [2024-12-10 22:59:22.107119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.107152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.107336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.107368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.107585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.107617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.107786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.598 [2024-12-10 22:59:22.107818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.598 qpair failed and we were unable to recover it. 00:28:14.598 [2024-12-10 22:59:22.107924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.107955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.108134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.108177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.108300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.108331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.108452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.108483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.108600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.108631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.108806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.108838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.108955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.108987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.109102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.109134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.109329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.109362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.109482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.109513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.109682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.109714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.109966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.109998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.110097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.110129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.110285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.110355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.110511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.110547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.110662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.110694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.110872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.110903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.111082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.111115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.111329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.111363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.111574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.111606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.111816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.111847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.112024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.112056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.112190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.112225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.112348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.112380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.112549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.112581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.112685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.112717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.112899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.112931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.113042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.113073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.113242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.113275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.113455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.113486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.113656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.113687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 
00:28:14.599 [2024-12-10 22:59:22.113794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.113831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.114000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.114036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.114203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.114235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.599 qpair failed and we were unable to recover it. 00:28:14.599 [2024-12-10 22:59:22.114346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.599 [2024-12-10 22:59:22.114378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 00:28:14.600 [2024-12-10 22:59:22.114564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.600 [2024-12-10 22:59:22.114596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 
00:28:14.600 [2024-12-10 22:59:22.118518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.600 [2024-12-10 22:59:22.118549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 00:28:14.600 [2024-12-10 22:59:22.118653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.600 [2024-12-10 22:59:22.118684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 00:28:14.600 [2024-12-10 22:59:22.118868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.600 [2024-12-10 22:59:22.118903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 00:28:14.600 [2024-12-10 22:59:22.119021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.600 [2024-12-10 22:59:22.119052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 00:28:14.600 [2024-12-10 22:59:22.119177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.600 [2024-12-10 22:59:22.119210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.600 qpair failed and we were unable to recover it. 
00:28:14.601 [2024-12-10 22:59:22.126099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.601 [2024-12-10 22:59:22.126130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.601 qpair failed and we were unable to recover it. 00:28:14.601 [2024-12-10 22:59:22.126320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.601 [2024-12-10 22:59:22.126356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.601 qpair failed and we were unable to recover it. 00:28:14.601 [2024-12-10 22:59:22.126528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.601 [2024-12-10 22:59:22.126559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.601 qpair failed and we were unable to recover it. 00:28:14.601 [2024-12-10 22:59:22.126663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.601 [2024-12-10 22:59:22.126695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.601 qpair failed and we were unable to recover it. 00:28:14.601 [2024-12-10 22:59:22.126864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.601 [2024-12-10 22:59:22.126896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.601 qpair failed and we were unable to recover it. 
00:28:14.602 [2024-12-10 22:59:22.136200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.602 [2024-12-10 22:59:22.136233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.602 qpair failed and we were unable to recover it. 00:28:14.602 [2024-12-10 22:59:22.136430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.602 [2024-12-10 22:59:22.136461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.602 qpair failed and we were unable to recover it. 00:28:14.602 [2024-12-10 22:59:22.136559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.602 [2024-12-10 22:59:22.136590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.602 qpair failed and we were unable to recover it. 00:28:14.602 [2024-12-10 22:59:22.136782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.602 [2024-12-10 22:59:22.136814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.602 qpair failed and we were unable to recover it. 00:28:14.602 [2024-12-10 22:59:22.136926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.602 [2024-12-10 22:59:22.136958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.602 qpair failed and we were unable to recover it. 
00:28:14.602 [2024-12-10 22:59:22.137198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.137230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.137333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.137370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.137484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.137515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.137617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.137648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.137908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.137939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.138105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.138137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.138348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.138379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.138505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.138537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.138723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.138754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.138868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.138900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.139011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.139043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.139144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.139182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.139408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.139439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.139614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.139644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.139755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.139786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.140093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.140124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.140336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.140369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.140485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.140516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.140725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.140756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.141018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.141050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.141193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.141226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.141512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.141543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.141712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.141744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.142001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.142032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.142142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.142184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.142397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.142428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.142527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.142557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.142658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.142690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.142805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.142837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.143029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.143061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.143234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.143267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.143393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.143424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.143603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.143634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.143747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.143778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.143896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.143927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 
00:28:14.603 [2024-12-10 22:59:22.144031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.144062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.144264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.144296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.144424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.144455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.144571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.144601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.603 qpair failed and we were unable to recover it. 00:28:14.603 [2024-12-10 22:59:22.144702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.603 [2024-12-10 22:59:22.144734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.144833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.144864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.144965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.145002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.145183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.145216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.145409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.145440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.145561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.145591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.145702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.145734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.145914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.145946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.146122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.146153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.146297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.146328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.146460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.146492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.146661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.146692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.146858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.146889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.147092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.147124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.147323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.147356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.147534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.147566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.147739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.147770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.147884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.147915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.148031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.148063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.148175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.148208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.148399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.148432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.148597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.148628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.148838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.148868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.149037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.149068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.149233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.149267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.149380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.149412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.149613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.149644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.149816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.149847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.150020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.150051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.150275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.150309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 00:28:14.604 [2024-12-10 22:59:22.150477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.150509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
00:28:14.604 [2024-12-10 22:59:22.150748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.604 [2024-12-10 22:59:22.150779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.604 qpair failed and we were unable to recover it. 
[... the same three-message pattern — posix.c:1054:posix_sock_create "connect() failed, errno = 111", then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats continuously from 2024-12-10 22:59:22.150882 through 2024-12-10 22:59:22.171801 (elapsed log time 00:28:14.604 - 00:28:14.607); only the timestamps differ ...]
00:28:14.607 [2024-12-10 22:59:22.171976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.607 [2024-12-10 22:59:22.172007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.607 qpair failed and we were unable to recover it. 00:28:14.607 [2024-12-10 22:59:22.172201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.607 [2024-12-10 22:59:22.172234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.607 qpair failed and we were unable to recover it. 00:28:14.607 [2024-12-10 22:59:22.172345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.607 [2024-12-10 22:59:22.172377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.607 qpair failed and we were unable to recover it. 00:28:14.607 [2024-12-10 22:59:22.172486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.607 [2024-12-10 22:59:22.172517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.607 qpair failed and we were unable to recover it. 00:28:14.607 [2024-12-10 22:59:22.172703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.607 [2024-12-10 22:59:22.172734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.607 qpair failed and we were unable to recover it. 
00:28:14.607 [2024-12-10 22:59:22.172865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.607 [2024-12-10 22:59:22.172896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.607 qpair failed and we were unable to recover it. 00:28:14.607 [2024-12-10 22:59:22.173029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.173168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.173302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.173450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.173583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.173733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.173874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.173905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.174040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.174070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.174184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.174221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.174343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.174374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.174485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.174522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.174632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.174663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.174836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.174867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.175106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.175137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.175289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.175321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.175493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.175524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.175643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.175673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.175783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.175814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.175921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.175952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.176124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.176156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.176290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.176321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.176519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.176549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.176676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.176707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.176812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.176842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.176977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.177114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.177290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.177436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.177568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.177708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.177865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.177896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.178068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.178098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.178210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.178242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.608 [2024-12-10 22:59:22.178350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.178381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 
00:28:14.608 [2024-12-10 22:59:22.178486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.608 [2024-12-10 22:59:22.178517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.608 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.178628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.178659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.178773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.178804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.179036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.179106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.179274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.179313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.179428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.179460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.179580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.179611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.179778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.179810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.179912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.179943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.180060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.180203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.180349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.180483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.180612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.180748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.180959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.180991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.181108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.181149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.181276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.181307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.181430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.181461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.181588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.181618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.181755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.181787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.181892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.181923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.182034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.182065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.182183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.182216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.182342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.182373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.182491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.182522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.182632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.182662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.182842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.182873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.182997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.183028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.183199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.183232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.183340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.183372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.183487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.183518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.183709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.183740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.183923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.183954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.184059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.184091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.609 [2024-12-10 22:59:22.184221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.184254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.184368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.184399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.184511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.184542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.184652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.184684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 00:28:14.609 [2024-12-10 22:59:22.184863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.609 [2024-12-10 22:59:22.184900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.609 qpair failed and we were unable to recover it. 
00:28:14.610 [2024-12-10 22:59:22.185000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.610 [2024-12-10 22:59:22.185031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.610 qpair failed and we were unable to recover it. 00:28:14.610 [2024-12-10 22:59:22.185154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.610 [2024-12-10 22:59:22.185193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.610 qpair failed and we were unable to recover it. 00:28:14.610 [2024-12-10 22:59:22.185299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.610 [2024-12-10 22:59:22.185330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.610 qpair failed and we were unable to recover it. 00:28:14.610 [2024-12-10 22:59:22.185440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.610 [2024-12-10 22:59:22.185476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.610 qpair failed and we were unable to recover it. 00:28:14.610 [2024-12-10 22:59:22.185657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.610 [2024-12-10 22:59:22.185686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.610 qpair failed and we were unable to recover it. 
00:28:14.612 [2024-12-10 22:59:22.199731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.612 [2024-12-10 22:59:22.199773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.612 qpair failed and we were unable to recover it.
00:28:14.612 [2024-12-10 22:59:22.199871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.612 [2024-12-10 22:59:22.199900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.612 qpair failed and we were unable to recover it.
00:28:14.612 [2024-12-10 22:59:22.200068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.612 [2024-12-10 22:59:22.200098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.612 qpair failed and we were unable to recover it.
00:28:14.612 [2024-12-10 22:59:22.200376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.612 [2024-12-10 22:59:22.200408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.612 qpair failed and we were unable to recover it.
00:28:14.612 [2024-12-10 22:59:22.200581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.612 [2024-12-10 22:59:22.200662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.612 qpair failed and we were unable to recover it.
00:28:14.612 [2024-12-10 22:59:22.203253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.203287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-12-10 22:59:22.203460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.203490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-12-10 22:59:22.203592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.203623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-12-10 22:59:22.203744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.203776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-12-10 22:59:22.203891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.203922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 
00:28:14.612 [2024-12-10 22:59:22.204025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.204057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-12-10 22:59:22.204179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.204210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.612 qpair failed and we were unable to recover it. 00:28:14.612 [2024-12-10 22:59:22.204328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.612 [2024-12-10 22:59:22.204359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.204489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.204521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.204647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.204677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.204778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.204810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.204914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.204946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.205114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.205144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.205263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.205301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.205410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.205441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.205557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.205588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.205756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.205789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.205958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.205990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.206094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.206126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.206246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.206279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.206387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.206419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.206532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.206563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.206674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.206706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.206823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.206856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.207024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.207055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.207174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.207208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.207377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.207409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.207598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.207630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.207801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.207832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.207946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.207978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.208179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.208212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.208319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.208349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.208464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.208496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.208601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.208633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.208740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.208771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.208880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.208911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.209016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.209047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.209150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.209194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.209369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.209400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.209520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.209551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 
00:28:14.613 [2024-12-10 22:59:22.209663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.209695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.613 [2024-12-10 22:59:22.209811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.613 [2024-12-10 22:59:22.209842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.613 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.209954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.209985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.210095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.210126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.210313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.210348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.210451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.210483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.210586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.210617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.210720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.210753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.210871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.210900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.211018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.211051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.211182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.211213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.211377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.211408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.211524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.211555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.211753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.211790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.211967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.211997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.212199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.212230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.212349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.212379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.212486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.212517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.212755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.212786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.212891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.212922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.213045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.213076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.213249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.213283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.213403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.213433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.213553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.213582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.213752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.213784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.213893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.213923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.214091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.214128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.214315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.214348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.214467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.214497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.214601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.214632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.214739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.214769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.214884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.214915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.215060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.215092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.215196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.215226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 00:28:14.614 [2024-12-10 22:59:22.215399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.614 [2024-12-10 22:59:22.215432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.614 qpair failed and we were unable to recover it. 
00:28:14.614 [2024-12-10 22:59:22.215538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.614 [2024-12-10 22:59:22.215569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.614 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence for tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 repeats continuously through 2024-12-10 22:59:22.235026 ...]
00:28:14.617 [2024-12-10 22:59:22.235149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.617 [2024-12-10 22:59:22.235188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.617 qpair failed and we were unable to recover it. 00:28:14.617 [2024-12-10 22:59:22.235303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.617 [2024-12-10 22:59:22.235334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.617 qpair failed and we were unable to recover it. 00:28:14.617 [2024-12-10 22:59:22.235452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.617 [2024-12-10 22:59:22.235484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.617 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.235653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.235686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.235885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.235917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.236017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.236165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.236321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.236460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.236611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.236753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.236893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.236925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.237090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.237121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.237306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.237337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.237525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.237556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.237658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.237688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.237858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.237889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.238009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.238140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.238318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.238456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.238609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.238740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.238871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.238907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.239010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.239040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.239224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.239256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.239444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.239475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.239573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.239603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.239708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.239736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.239923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.239951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.240061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.240088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.240256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.240286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.240407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.240434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.240544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.240573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.240686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.240715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.240872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.240900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.240993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.241021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.241194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.241224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.241385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.241413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.241507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.241535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 
00:28:14.618 [2024-12-10 22:59:22.241632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.241661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.618 [2024-12-10 22:59:22.241819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.618 [2024-12-10 22:59:22.241848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.618 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.242017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.242045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.242302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.242331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.242507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.242536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-12-10 22:59:22.242701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.242728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.242826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.242854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.242969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.242997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.243169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.243198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 00:28:14.619 [2024-12-10 22:59:22.243297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.243325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.619 [2024-12-10 22:59:22.243480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.619 [2024-12-10 22:59:22.243550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.619 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-12-10 22:59:22.251731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.251762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.252022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.252057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.252182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.252215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.252321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.252353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.252535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.252566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-12-10 22:59:22.252751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.252789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.252964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.252994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.253107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.253138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.253252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.253289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.253402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.253432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-12-10 22:59:22.253602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.253633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.253756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.253787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.253904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.253935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.254047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.254078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.254249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.254282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-12-10 22:59:22.254451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.254482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.254683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.254714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.254898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.254929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.255031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.255062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.255178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.255211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-12-10 22:59:22.255379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.255410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.255606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.255637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.255758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.255789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.255890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.255922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 00:28:14.620 [2024-12-10 22:59:22.256118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.620 [2024-12-10 22:59:22.256149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.620 qpair failed and we were unable to recover it. 
00:28:14.620 [2024-12-10 22:59:22.256282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.256314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.256482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.256513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.256688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.256720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.256888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.256919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.257116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.257147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-12-10 22:59:22.257321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.257354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.257523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.257555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.257734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.257765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.257945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.257976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 00:28:14.621 [2024-12-10 22:59:22.258190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.258223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
00:28:14.621 [2024-12-10 22:59:22.258442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.621 [2024-12-10 22:59:22.258512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.621 qpair failed and we were unable to recover it. 
[the same sequence for tqpair=0x7fd0f0000b90 repeats 59 more times between 22:59:22.258640 and 22:59:22.269498; repeats elided]
00:28:14.622 [2024-12-10 22:59:22.269631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-12-10 22:59:22.269661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-12-10 22:59:22.269793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-12-10 22:59:22.269825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-12-10 22:59:22.269958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-12-10 22:59:22.269989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.622 [2024-12-10 22:59:22.270180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.622 [2024-12-10 22:59:22.270213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.622 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.270406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.270436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.270610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.270641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.270756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.270787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.271064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.271097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.271333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.271365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.271535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.271566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.271755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.271785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.272003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.272033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.272200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.272233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.272334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.272365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.272475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.272506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.272680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.272711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.272914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.272945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.273118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.273148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.273416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.273447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.273550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.273582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.273771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.273803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.273995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.274026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.274195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.274228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.274398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.274430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.274550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.274581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.274761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.274792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.274911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.274942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.275124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.275156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.275284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.275316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.275482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.275514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.275680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.275712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.275873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.275905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.276030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.276061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.276179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.276217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.276412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.276444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.276610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.276643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.276745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.276775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.276882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.276914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.277086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.277117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.277296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.277328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 
00:28:14.623 [2024-12-10 22:59:22.277510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.277541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.277804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.277837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.623 [2024-12-10 22:59:22.277949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.623 [2024-12-10 22:59:22.277980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.623 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.278083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.278114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.278222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.278254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.278516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.278547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.278719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.278749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.278858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.278890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.279009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.279040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.279207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.279239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.279442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.279474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.279595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.279626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.279740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.279771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.279936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.279967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.280134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.280176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.280361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.280392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.280582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.280612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.280732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.280762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.280933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.280964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.281201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.281233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.281493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.281530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.281728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.281759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.281859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.281890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.282001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.282032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.282149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.282209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.282328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.282360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.282618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.282649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.282771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.282801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.282969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.283002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.283118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.283148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.283262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.283296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.283505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.283537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.283710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.283741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.283906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.283937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.284121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.284153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.624 [2024-12-10 22:59:22.284341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.284372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.284489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.284520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.284686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.284716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.284891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.284922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 00:28:14.624 [2024-12-10 22:59:22.285043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.624 [2024-12-10 22:59:22.285074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.624 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.306630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.306662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.306779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.306809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.306982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.307016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.307132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.307170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.307281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.307312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.307422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.307455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.307571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.307604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.307777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.307811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.307998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.308227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.308368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.308514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.308661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.308809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.308953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.308983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.309172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.309206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.309310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.309341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.309460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.309490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.309591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.309623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.309738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.309769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.309941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.309972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.310169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.310202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.310391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.310422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.310593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.310625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.310815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.310846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.311014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.311166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.311312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.311456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.311591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 
00:28:14.910 [2024-12-10 22:59:22.311798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.311938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.311969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.312074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.312105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.910 [2024-12-10 22:59:22.312314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.910 [2024-12-10 22:59:22.312346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.910 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.312530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.312561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.312668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.312699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.312871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.312902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.313005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.313037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.313156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.313199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.313373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.313404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.313589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.313620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.313802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.313833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.314003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.314137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.314321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.314457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.314597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.314741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.314941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.314973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.315096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.315126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.315310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.315344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.315532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.315565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.315684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.315714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.315815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.315849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.316049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.316083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.316207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.316241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.316417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.316449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.316574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.316605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.316724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.316756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.316867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.316897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.317005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.317037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.317211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.317244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.317411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.317442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.317615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.317647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.317819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.317850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.318092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.318122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.318309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.318341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.318514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.318545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.318731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.318762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.318967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.318998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 
00:28:14.911 [2024-12-10 22:59:22.319106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.319143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.319278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.319310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.319436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.911 [2024-12-10 22:59:22.319468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.911 qpair failed and we were unable to recover it. 00:28:14.911 [2024-12-10 22:59:22.319572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-12-10 22:59:22.319604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 00:28:14.912 [2024-12-10 22:59:22.319798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.912 [2024-12-10 22:59:22.319829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.912 qpair failed and we were unable to recover it. 
00:28:14.912 [2024-12-10 22:59:22.319936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.319967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.320096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.320127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.320262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.320293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.320462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.320493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.320664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.320695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.320799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.320829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.320996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.321027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.321128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.321168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.321338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.321368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.321554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.321585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.321690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.321721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.321842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.321872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.322042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.322072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.322183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.322217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.322410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.322441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.322679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.322711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.322830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.322861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.323103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.323253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.323389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.323539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.323681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.323890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.323995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.324026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.324133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.324184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.324287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.324319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.324417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.324449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.324648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.324680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.324853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.324884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.324987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.325017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.325122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.325152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.325264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.912 [2024-12-10 22:59:22.325296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.912 qpair failed and we were unable to recover it.
00:28:14.912 [2024-12-10 22:59:22.325392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.325424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.325587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.325617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.325785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.325816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.325934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.325970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.326209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.326242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.326346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.326377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.326560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.326591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.326694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.326724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.326834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.326864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.326979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.327009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.327218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.327250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.327354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.327385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.327551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.327582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.327683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.327715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.327901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.327931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.328045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.328077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.328206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.328239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.328360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.328392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.328493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.328525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.328647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.328678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.328938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.328969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.329138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.329177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.329278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.329309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.329424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.329455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.329623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.329654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.329850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.329881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.329998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.330135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.330279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.330480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.330618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.330757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.330955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.330987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.331236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.331268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.331385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.331416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.331523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.331556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.331663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.331694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.331860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.331891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.332014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.913 [2024-12-10 22:59:22.332046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.913 qpair failed and we were unable to recover it.
00:28:14.913 [2024-12-10 22:59:22.332243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.332275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.332472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.332503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.332604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.332635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.332756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.332787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.332954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.332990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.333094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.333125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.333259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.333291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.333524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.333558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.333680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.333711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.333903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.333934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.334036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.334068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.334187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.334219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.334326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.334357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.334623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.334655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.334893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.334924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.335125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.335168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.335269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.335300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.335471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.335503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.335681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.335712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.335817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.335848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.336022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.336057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.336179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.336211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.336330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.336360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.336531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.336562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.336743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.336774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.336939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.336971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.337180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.337217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.337396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.337428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.337555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.337587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.337697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.337730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.337832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.337865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.338038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa0b20 is same with the state(6) to be set
00:28:14.914 [2024-12-10 22:59:22.338369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.338440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.338587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.338624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.338755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.338788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.338957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.338989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.339279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.339312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.339419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.339451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.339618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.339651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.339821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.914 [2024-12-10 22:59:22.339853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.914 qpair failed and we were unable to recover it.
00:28:14.914 [2024-12-10 22:59:22.340034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.915 [2024-12-10 22:59:22.340065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.915 qpair failed and we were unable to recover it.
00:28:14.915 [2024-12-10 22:59:22.340331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.915 [2024-12-10 22:59:22.340365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.915 qpair failed and we were unable to recover it.
00:28:14.915 [2024-12-10 22:59:22.340546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.915 [2024-12-10 22:59:22.340578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.915 qpair failed and we were unable to recover it.
00:28:14.915 [2024-12-10 22:59:22.340701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.915 [2024-12-10 22:59:22.340733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.915 qpair failed and we were unable to recover it.
00:28:14.915 [2024-12-10 22:59:22.340840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.915 [2024-12-10 22:59:22.340871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:14.915 qpair failed and we were unable to recover it.
00:28:14.915 [2024-12-10 22:59:22.341061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.341094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.341200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.341232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.341405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.341437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.341629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.341661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.341831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.341864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-12-10 22:59:22.341966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.341998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.342178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.342211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.342378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.342409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.342512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.342545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.342712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.342743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-12-10 22:59:22.342950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.342982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.343186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.343220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.343331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.343363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.343533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.343572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.343748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.343780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-12-10 22:59:22.343950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.343982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.344167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.344201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.344383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.344414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.344529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.344560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.344664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.344696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-12-10 22:59:22.344879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.344910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.345071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.345101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.345227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.345260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.345458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.345490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.345604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.345636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-12-10 22:59:22.345818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.345850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.346022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.346054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.346231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.346263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.346437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.346468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.346589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.346620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 
00:28:14.915 [2024-12-10 22:59:22.346800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.346833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.347066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.347098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.347229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.347262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.347459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.347491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.915 qpair failed and we were unable to recover it. 00:28:14.915 [2024-12-10 22:59:22.347611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.915 [2024-12-10 22:59:22.347643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.347812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.347844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.348052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.348084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.348201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.348235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.348449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.348481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.348649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.348682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.348856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.348888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.349055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.349087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.349256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.349289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.349463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.349495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.349664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.349696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.349793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.349825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.349932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.349963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.350132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.350173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.350346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.350378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.350500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.350533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.350709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.350740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.350944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.350977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.351237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.351270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.351380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.351416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.351612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.351643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.351813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.351846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.351959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.351990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.352205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.352242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.352416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.352446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.352615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.352646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.352760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.352792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.352961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.352992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.353100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.353131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.353320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.353353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.353535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.353568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 
00:28:14.916 [2024-12-10 22:59:22.353789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.353821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.353939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.353970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.354094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.354126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.916 [2024-12-10 22:59:22.354304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.916 [2024-12-10 22:59:22.354336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.916 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.354569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.354601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.354773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.354805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.354932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.354963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.355105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.355137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.355258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.355290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.355411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.355443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.355550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.355582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.355692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.355725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.355891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.355923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.356114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.356145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.356272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.356305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.356413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.356445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.356703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.356734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.356952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.356984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.357092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.357123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.357302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.357335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.357534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.357566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.357735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.357768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.357958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.357991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.358170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.358202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.358317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.358349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.358533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.358565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.358665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.358698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.358870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.358902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.359020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.359057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.359179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.359213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.359381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.359412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.359540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.359571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.359675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.359707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.359918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.359949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.360118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.360150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.360280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.360313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.360420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.360451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.360552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.360583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.360698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.360731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.360868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.360899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 
00:28:14.917 [2024-12-10 22:59:22.361017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.361048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.361167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.361201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.361402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.917 [2024-12-10 22:59:22.361433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.917 qpair failed and we were unable to recover it. 00:28:14.917 [2024-12-10 22:59:22.361611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.361644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.361748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.361780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.361950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.361983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.362167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.362201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.362322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.362355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.362522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.362554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.362656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.362687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.362791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.362822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.362989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.363021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.363121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.363152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.363407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.363439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.363618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.363650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.363767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.363799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.363979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.364011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.364182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.364216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.364402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.364435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.364563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.364593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.364849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.364880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.364991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.365024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.365136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.365184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.365358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.365389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.365560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.365592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.365691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.365722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.365891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.365923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.366030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.366063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.366254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.366293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.366411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.366442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.366635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.366666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.366834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.366867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.367035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.367071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.367253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.367285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.367490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.367522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.367628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.367659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.367766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.367797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.367908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.367942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.368128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.368168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.368350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.368381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 
00:28:14.918 [2024-12-10 22:59:22.368553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.368585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.368773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.918 [2024-12-10 22:59:22.368807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.918 qpair failed and we were unable to recover it. 00:28:14.918 [2024-12-10 22:59:22.368989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.369021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.369217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.369251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.369356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.369388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-12-10 22:59:22.369602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.369634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.369835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.369868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.370041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.370073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.370270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.370302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.370430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.370462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-12-10 22:59:22.370562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.370595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.370706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.370738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.370857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.370889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.371000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.371031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.371199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.371232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-12-10 22:59:22.371501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.371533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.371653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.371685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.371862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.371894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.372086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.372118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.372224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.372256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-12-10 22:59:22.372434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.372466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.372634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.372666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.372856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.372888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.373073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.373105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 00:28:14.919 [2024-12-10 22:59:22.373283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.373317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it. 
00:28:14.919 [2024-12-10 22:59:22.373492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.919 [2024-12-10 22:59:22.373524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.919 qpair failed and we were unable to recover it.
[identical error sequence repeated from 22:59:22.373696 through 22:59:22.394901: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420, each attempt ending "qpair failed and we were unable to recover it."]
00:28:14.922 [2024-12-10 22:59:22.395006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.395040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.395210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.395244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.395418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.395451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.395620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.395651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.395762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.395795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-12-10 22:59:22.395927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.395961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.396081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.396113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.396317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.396350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.396516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.396553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.396790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.396823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-12-10 22:59:22.397015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.397046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.397174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.397207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.397326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.397358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.397460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.397491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.922 [2024-12-10 22:59:22.397594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.397626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 
00:28:14.922 [2024-12-10 22:59:22.397814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.922 [2024-12-10 22:59:22.397847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.922 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.398015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.398046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.398182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.398216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.398319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.398351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.398522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.398555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.398728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.398759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.399048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.399079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.399192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.399226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.399396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.399428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.399632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.399666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.399800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.399833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.399934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.399966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.400236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.400270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.400479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.400511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.400695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.400726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.400934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.400965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.401166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.401199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.401315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.401346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.401516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.401548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.401717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.401749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.402025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.402095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.402258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.402296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.402423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.402456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.402656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.402687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.402869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.402900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.403075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.403107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.403288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.403322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.403436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.403467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.403570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.403601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.403771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.403802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.404020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.404051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.404169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.404202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.404311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.404342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.404526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.404568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.404692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.404725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.404936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.404968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.405088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.405121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.405307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.405338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.405439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.405472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 00:28:14.923 [2024-12-10 22:59:22.405583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.923 [2024-12-10 22:59:22.405615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.923 qpair failed and we were unable to recover it. 
00:28:14.923 [2024-12-10 22:59:22.405801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.405834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.405955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.405987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.406180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.406213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.406331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.406363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.406472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.406503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-12-10 22:59:22.406714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.406747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.406915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.406946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.407135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.407180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.407304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.407335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.407525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.407556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-12-10 22:59:22.407726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.407758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.407862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.407894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.408069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.408100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.408333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.408366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.408488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.408519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-12-10 22:59:22.408689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.408721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.408909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.408940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.409068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.409100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.409225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.409258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 00:28:14.924 [2024-12-10 22:59:22.409375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.924 [2024-12-10 22:59:22.409407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:14.924 qpair failed and we were unable to recover it. 
00:28:14.924 [2024-12-10 22:59:22.409630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.924 [2024-12-10 22:59:22.409700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.924 qpair failed and we were unable to recover it.
00:28:14.925 [... identical connect()/qpair-failure records for tqpair=0x7fd0f8000b90 repeated through 22:59:22.417141 ...]
00:28:14.925 [2024-12-10 22:59:22.417382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.925 [2024-12-10 22:59:22.417453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.925 qpair failed and we were unable to recover it.
00:28:14.927 [... identical connect()/qpair-failure records for tqpair=0xa92be0 repeated through 22:59:22.432096 ...]
00:28:14.927 [2024-12-10 22:59:22.432286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.432319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.432525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.432557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.432762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.432793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.432901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.432932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.433101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.433131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 
00:28:14.927 [2024-12-10 22:59:22.433251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.433284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.433478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.433510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.433706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.433738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.433920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.433951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.434063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.434094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 
00:28:14.927 [2024-12-10 22:59:22.434261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.434293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.434487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.434517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.434684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.434714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.434817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.434847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.435056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.435087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 
00:28:14.927 [2024-12-10 22:59:22.435260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.435291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.435410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.435441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.435617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.435647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.927 qpair failed and we were unable to recover it. 00:28:14.927 [2024-12-10 22:59:22.435749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.927 [2024-12-10 22:59:22.435778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.435878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.435910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.436102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.436139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.436280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.436310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.436414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.436444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.436616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.436654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.436847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.436880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.436992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.437022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.437193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.437226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.437392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.437425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.437531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.437561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.437691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.437723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.437841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.437871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.438048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.438078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.438284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.438318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.438486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.438517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.438623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.438655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.438788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.438820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.438987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.439017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.439130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.439173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.439342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.439371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.439634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.439666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.439871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.439903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.440016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.440045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.440166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.440199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.440309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.440339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.440439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.440471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.440666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.440698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.440805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.440836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.440973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.441005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.441186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.441219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.441339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.441371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.441488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.441518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.441682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.441713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.441883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.441914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.442037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.442067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.442235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.442269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 
00:28:14.928 [2024-12-10 22:59:22.442446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.442478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.442610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.442641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.442743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.928 [2024-12-10 22:59:22.442774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.928 qpair failed and we were unable to recover it. 00:28:14.928 [2024-12-10 22:59:22.442883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.442914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.443021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.443051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-12-10 22:59:22.443149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.443201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.443370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.443407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.443589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.443620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.443875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.443907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.444086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.444116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-12-10 22:59:22.444324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.444356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.444544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.444574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.444691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.444722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.444916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.444948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.445050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.445081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-12-10 22:59:22.445261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.445294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.445398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.445429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.445606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.445637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.445806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.445838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.446002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.446033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-12-10 22:59:22.446210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.446243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.446451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.446484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.446747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.446780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.446957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.446990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.447105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.447136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-12-10 22:59:22.447275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.447306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.447410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.447440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.447608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.447640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.447830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.447860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 00:28:14.929 [2024-12-10 22:59:22.447995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.929 [2024-12-10 22:59:22.448027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.929 qpair failed and we were unable to recover it. 
00:28:14.929 [2024-12-10 22:59:22.448139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.448180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.448445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.448475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.448653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.448685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.448794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.448830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.449020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.449052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.449185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.449218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.449355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.449385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.449496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.449527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.449766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.449799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.449921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.449953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.450148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.450191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.450380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.450410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.450522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.929 [2024-12-10 22:59:22.450554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.929 qpair failed and we were unable to recover it.
00:28:14.929 [2024-12-10 22:59:22.450836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.450868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.450997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.451028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.451133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.451186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.451357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.451389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.451609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.451679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.451880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.451915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.452172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.452206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.452411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.452443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.452638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.452670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.452790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.452822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.452997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.453028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.453246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.453279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.453470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.453501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.453670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.453702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.453816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.453846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.453965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.453996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.454110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.454142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.454254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.454295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.454481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.454514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.454651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.454685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.454799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.454831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.454953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.454985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.455153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.455202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.455312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.455342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.455530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.455562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.455751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.455782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.455956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.455988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.456168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.456202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.456389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.456419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.456522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.456554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.456668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.456699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.456876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.456908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.457036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.457068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.457264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.457297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.457468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.457500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.930 [2024-12-10 22:59:22.457616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.930 [2024-12-10 22:59:22.457647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.930 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.457814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.457845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.458025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.458056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.458245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.458277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.458447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.458478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.458592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.458624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.458745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.458779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.458901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.458932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.459098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.459130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.459447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.459516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.459640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.459675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.459798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.459829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.459997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.460029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.460205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.460240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.460472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.460503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.460686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.460719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.460915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.460948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.461051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.461082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.461194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.461226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.461333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.461366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.461486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.461516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.461709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.461741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.461862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.461903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.462024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.462056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.462347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.462382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.462490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.462521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.462689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.462720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.462935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.462969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.463176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.463211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.463396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.463427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.463609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.463640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.463872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.463904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.464017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.464048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.464213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.464246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.464413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.464445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.464618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.464649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.464918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.464952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.465134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.465174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.465370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.465401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.465515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.931 [2024-12-10 22:59:22.465547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.931 qpair failed and we were unable to recover it.
00:28:14.931 [2024-12-10 22:59:22.465665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.465697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.465991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.466023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.466191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.466223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.466324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.466355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.466545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.466577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.466683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.466716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.466906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:14.932 [2024-12-10 22:59:22.466938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:14.932 qpair failed and we were unable to recover it.
00:28:14.932 [2024-12-10 22:59:22.467053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.467084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.467211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.467244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.467364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.467397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.467587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.467619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.467786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.467818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-12-10 22:59:22.467981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.468012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.468276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.468308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.468426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.468459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.468650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.468680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.468860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.468892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-12-10 22:59:22.468993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.469024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.469218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.469251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.469437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.469469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.469579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.469611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.469776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.469808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-12-10 22:59:22.469918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.469955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.470126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.470167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.470339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.470370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.470489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.470520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.470715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.470746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-12-10 22:59:22.470914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.470946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.471075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.471106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.471222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.471256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.471447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.471479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.471601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.471633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-12-10 22:59:22.471833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.471864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.471964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.471995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.472174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.472208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.472330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.472361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.472635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.472667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 
00:28:14.932 [2024-12-10 22:59:22.472848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.472881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.473080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.473112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.932 qpair failed and we were unable to recover it. 00:28:14.932 [2024-12-10 22:59:22.473250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.932 [2024-12-10 22:59:22.473283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.473451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.473484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.473592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.473623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.473819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.473851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.473962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.473993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.474105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.474138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.474272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.474304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.474472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.474509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.474682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.474714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.474976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.475007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.475135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.475180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.475368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.475399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.475567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.475598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.475715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.475747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.475859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.475890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.476007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.476039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.476220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.476253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.476453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.476485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.476609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.476641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.476761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.476792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.476958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.476991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.477167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.477200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.477371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.477403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.477584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.477621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.477839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.477870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.478037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.478069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.478238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.478270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.478436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.478469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.478731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.478763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.478870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.478902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.479078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.479109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.479245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.479278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.479445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.479478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.479599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.479631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.479804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.479836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.480021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.480054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.480235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.480268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.480468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.480499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 
00:28:14.933 [2024-12-10 22:59:22.480668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.480701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.933 qpair failed and we were unable to recover it. 00:28:14.933 [2024-12-10 22:59:22.480867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.933 [2024-12-10 22:59:22.480899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.481007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.481038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.481154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.481195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.481390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.481422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-12-10 22:59:22.481610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.481642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.481754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.481787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.482011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.482044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.482147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.482188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.482357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.482388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-12-10 22:59:22.482558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.482589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.482762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.482794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.482979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.483011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.483110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.483142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 00:28:14.934 [2024-12-10 22:59:22.483342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.483374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.934 [2024-12-10 22:59:22.483614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.934 [2024-12-10 22:59:22.483646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.934 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-12-10 22:59:22.505711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.505743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.505934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.505965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.506147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.506193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.506329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.506360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.506479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.506511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-12-10 22:59:22.506680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.506711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.506838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.506869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.506987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.507018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.507185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.507218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.507401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.507433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-12-10 22:59:22.507605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.507636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.507804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.507835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.507985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.508016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.508199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.508231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.508396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.508427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-12-10 22:59:22.508596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.508627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.508815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.508846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.508948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.508979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.509176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.509209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.509378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.509410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 
00:28:14.937 [2024-12-10 22:59:22.509577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.509608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.509845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.509877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.510080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.510111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.937 qpair failed and we were unable to recover it. 00:28:14.937 [2024-12-10 22:59:22.510238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.937 [2024-12-10 22:59:22.510270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.510440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.510471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.510738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.510775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.510948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.510979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.511097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.511128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.511334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.511366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.511483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.511515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.511684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.511716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.511837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.511869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.512038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.512069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.512238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.512289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.512503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.512535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.512716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.512747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.512864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.512895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.512998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.513030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.513196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.513228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.513411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.513443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.513548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.513579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.513696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.513727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.513896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.513933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.514175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.514207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.514327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.514358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.514460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.514492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.514599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.514630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.514826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.514858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.514977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.515009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.515124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.515155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.515396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.515428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.515531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.515563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.515678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.515710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.515919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.515951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.516141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.516197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 
00:28:14.938 [2024-12-10 22:59:22.516365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.516397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.516621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.516653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.938 qpair failed and we were unable to recover it. 00:28:14.938 [2024-12-10 22:59:22.516773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.938 [2024-12-10 22:59:22.516805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.516985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.517017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.517124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.517156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-12-10 22:59:22.517285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.517317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.517575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.517607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.517709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.517740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.517906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.517938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.518036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.518067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-12-10 22:59:22.518234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.518268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.518530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.518561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.518814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.518846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.519014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.519044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.519240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.519272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-12-10 22:59:22.519474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.519506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.519674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.519704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.519887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.519919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.520028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.520059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 00:28:14.939 [2024-12-10 22:59:22.520201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.520233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it. 
00:28:14.939 [2024-12-10 22:59:22.520355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.939 [2024-12-10 22:59:22.520387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.939 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x7fd0f8000b90 (addr=10.0.0.2, port=4420) repeated verbatim for every reconnect attempt from 22:59:22.520554 through 22:59:22.542454; duplicate entries elided ...]
00:28:14.942 [2024-12-10 22:59:22.542580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.542612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it.
00:28:14.942 [2024-12-10 22:59:22.542799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.542831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.543069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.543100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.543216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.543249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.543374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.543407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.543525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.543557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-12-10 22:59:22.543760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.543792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.544037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.544070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.544182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.544215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.544395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.544428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.544543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.544575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-12-10 22:59:22.544763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.544796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.544967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.544999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.545203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.545241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.545417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.545450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.545631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.545663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-12-10 22:59:22.545846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.545878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.545997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.546028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.546150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.546191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.546305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.546336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.546572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.546603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 
00:28:14.942 [2024-12-10 22:59:22.546775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.546808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.546985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.547016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.547200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.547232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.547338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.942 [2024-12-10 22:59:22.547372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.942 qpair failed and we were unable to recover it. 00:28:14.942 [2024-12-10 22:59:22.547490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.547521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.547729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.547761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.547935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.547966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.548080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.548112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.548303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.548337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.548445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.548477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.548668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.548699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.548824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.548855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.549035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.549067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.549246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.549281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.549390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.549422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.549680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.549712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.549817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.549849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.550022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.550054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.550178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.550212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.550460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.550530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.550672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.550707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.550824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.550857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.550966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.550997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.551175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.551208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.551401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.551432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.551551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.551581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.551749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.551781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.551909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.551939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.552040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.552070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.552257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.552291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.552475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.552506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.552622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.552651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.552760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.552791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.552935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.552965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.553142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.553184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 
00:28:14.943 [2024-12-10 22:59:22.553360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.553391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.553506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.553536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.553705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.553734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.553921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.943 [2024-12-10 22:59:22.553952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.943 qpair failed and we were unable to recover it. 00:28:14.943 [2024-12-10 22:59:22.554210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.554242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 
00:28:14.944 [2024-12-10 22:59:22.554367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.554400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.554587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.554617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.554808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.554842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.555013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.555043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.555248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.555281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 
00:28:14.944 [2024-12-10 22:59:22.555478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.555509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.555627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.555665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.555784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.555815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.555926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.555958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.556128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.556171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 
00:28:14.944 [2024-12-10 22:59:22.556435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.556468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.556636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.556666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.556849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.556881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.557074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.557107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 00:28:14.944 [2024-12-10 22:59:22.557289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.557322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 
00:28:14.944 [2024-12-10 22:59:22.557490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.944 [2024-12-10 22:59:22.557523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:14.944 qpair failed and we were unable to recover it. 
[... identical connect()/qpair-failure triplet repeated for tqpair=0xa92be0 from 22:59:22.557632 through 22:59:22.563208 ...]
00:28:14.945 [2024-12-10 22:59:22.563371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.945 [2024-12-10 22:59:22.563441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.945 qpair failed and we were unable to recover it. 
00:28:14.945 [2024-12-10 22:59:22.563679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.945 [2024-12-10 22:59:22.563749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:14.945 qpair failed and we were unable to recover it. 
[... identical triplet repeated for tqpair=0x7fd0f8000b90 from 22:59:22.563953 through 22:59:22.570883 ...]
[... identical triplet repeated for tqpair=0x7fd0ec000b90 from 22:59:22.571118 through 22:59:22.579416 ...]
00:28:14.947 [2024-12-10 22:59:22.579678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.579709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.579819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.579849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.580070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.580101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.580208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.580240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.580359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.580390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 
00:28:14.947 [2024-12-10 22:59:22.580579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.580609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.580777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.580808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.580927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.580958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.581056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.581087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.581253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.581285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 
00:28:14.947 [2024-12-10 22:59:22.581492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.581529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.581640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.581672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.581838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.581869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.582037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.582069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.582236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.582269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 
00:28:14.947 [2024-12-10 22:59:22.582388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.582419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.582614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.582646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.582766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.582797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.582897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.582928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.583098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.583130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 
00:28:14.947 [2024-12-10 22:59:22.583402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.583435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.583627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.583659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.583777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.947 [2024-12-10 22:59:22.583809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.947 qpair failed and we were unable to recover it. 00:28:14.947 [2024-12-10 22:59:22.584002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.584032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.584154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.584198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.584391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.584422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.584642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.584675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.584865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.584896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.585066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.585097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.585378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.585411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.585525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.585558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.585709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.585741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.585994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.586026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.586220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.586253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.586422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.586454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.586633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.586665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.586875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.586908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.587177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.587210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.587323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.587355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.587550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.587583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.587692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.587724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.587894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.587926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.588112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.588144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.588282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.588314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.588563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.588595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.588700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.588732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.588851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.588883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.589053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.589085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.589271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.589304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.589417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.589448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.589611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.589643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.589751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.589781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.589900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.589932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.590097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.590129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.590306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.590339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.590502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.590534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.590797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.590829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.590963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.590995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.591190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.591224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.591360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.591391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 
00:28:14.948 [2024-12-10 22:59:22.591493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.591524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.591715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.948 [2024-12-10 22:59:22.591747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.948 qpair failed and we were unable to recover it. 00:28:14.948 [2024-12-10 22:59:22.591849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.591881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.592063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.592093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.592273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.592306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 
00:28:14.949 [2024-12-10 22:59:22.592508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.592540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.592642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.592674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.592774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.592805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.592971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.593003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.593194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.593227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 
00:28:14.949 [2024-12-10 22:59:22.593340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.593371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.593506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.593538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.593639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.593670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.593874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.593905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.594143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.594185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 
00:28:14.949 [2024-12-10 22:59:22.594287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.594318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.594436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.594468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.594577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.594614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.594802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.594834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 00:28:14.949 [2024-12-10 22:59:22.594952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:14.949 [2024-12-10 22:59:22.594984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:14.949 qpair failed and we were unable to recover it. 
00:28:15.233 [2024-12-10 22:59:22.617004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.617035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.617140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.617181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.617374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.617405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.617585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.617617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.617732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.617764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 
00:28:15.233 [2024-12-10 22:59:22.617931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.617962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.618090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.618122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.618334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.618372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.618479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.618511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.233 [2024-12-10 22:59:22.618679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.618710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 
00:28:15.233 [2024-12-10 22:59:22.618990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.233 [2024-12-10 22:59:22.619021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.233 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.619212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.619245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.619363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.619394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.619496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.619528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.619698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.619729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.619914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.619945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.620062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.620094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.620263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.620295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.620409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.620441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.620570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.620601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.620703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.620734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.620911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.620944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.621139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.621201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.621407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.621439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.621605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.621636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.621873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.621904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.622093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.622125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.622254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.622286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.622477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.622509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.622700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.622732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.622846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.622877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.623075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.623107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.623287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.623319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.623506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.623538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.623719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.623751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.623902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.623933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.624117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.624149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.624266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.624299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.624416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.624448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.624550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.624583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.624704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.624736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.624846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.624878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.625046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.625078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.625288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.625322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.625437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.625468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.234 [2024-12-10 22:59:22.625571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.625602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.625771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.625803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.625927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.625965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.626134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.626173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 00:28:15.234 [2024-12-10 22:59:22.626390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.234 [2024-12-10 22:59:22.626421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.234 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-12-10 22:59:22.626543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.626574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.626689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.626720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.626858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.626889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.627058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.627089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.627195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.627227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-12-10 22:59:22.627339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.627370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.627491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.627522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.627801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.627832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.628008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.628038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.628179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.628211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-12-10 22:59:22.628330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.628360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.628471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.628503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.628713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.628744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.628933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.628964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.629075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.629106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-12-10 22:59:22.629261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.629294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.629485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.629516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.629629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.629660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.629778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.629809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.629925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.629957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-12-10 22:59:22.630080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.630109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.630241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.630274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.630459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.630490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.630594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.630625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 00:28:15.235 [2024-12-10 22:59:22.630752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.235 [2024-12-10 22:59:22.630783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.235 qpair failed and we were unable to recover it. 
00:28:15.235 [2024-12-10 22:59:22.630954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.235 [2024-12-10 22:59:22.630985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.235 qpair failed and we were unable to recover it.
[the three log lines above repeat 114 more times with advancing timestamps (22:59:22.631 through 22:59:22.652); errno, tqpair, addr, and port are identical in every repetition]
00:28:15.238 [2024-12-10 22:59:22.652061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.652092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.652275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.652307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.652488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.652520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.652639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.652670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.652935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.652966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 
00:28:15.238 [2024-12-10 22:59:22.653079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.653111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.653298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.653330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.653445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.653477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.653648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.653678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.653870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.653902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 
00:28:15.238 [2024-12-10 22:59:22.654031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.654062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.654197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.654230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.654428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.654459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.654579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.654610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.654797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.654829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 
00:28:15.238 [2024-12-10 22:59:22.654943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.654975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.238 qpair failed and we were unable to recover it. 00:28:15.238 [2024-12-10 22:59:22.655149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.238 [2024-12-10 22:59:22.655188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.655300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.655338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.655511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.655542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.655643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.655675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.655864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.655896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.656008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.656040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.656206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.656237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.656411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.656442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.656550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.656582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.656700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.656731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.656841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.656872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.657038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.657070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.657205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.657237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.657379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.657411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.657600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.657631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.657802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.657834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.658029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.658061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.658231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.658264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.658432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.658463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.658677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.658709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.658811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.658842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.659009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.659041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.659278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.659310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.659431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.659463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.659582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.659615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.659720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.659751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.659940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.659972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.660080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.660111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.660283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.660352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.660546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.660581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.660700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.660733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.660845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.660877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.661047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.661079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.661274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.661307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.661409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.661440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.661546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.661577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.661749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.661781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.661946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.661977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.662182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.662216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 
00:28:15.239 [2024-12-10 22:59:22.662341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.662373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.662498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.662529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.239 qpair failed and we were unable to recover it. 00:28:15.239 [2024-12-10 22:59:22.662698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.239 [2024-12-10 22:59:22.662729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.662846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.662878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.662982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.663012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.663198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.663232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.663400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.663432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.663686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.663717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.663883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.663914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.664026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.664056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.664259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.664293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.664463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.664494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.664661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.664693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.664920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.664951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.665055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.665085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.665225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.665258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.665361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.665398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.665508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.665539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.665706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.665738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.665932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.665964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.666084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.666115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.666296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.666328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.666573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.666604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.666716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.666748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.666859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.666890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.667059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.667091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.667265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.667299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.667432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.667463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.667644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.667675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.667778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.667810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.668006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.668153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.668308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.668437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.668587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.668790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.668921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.668952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.669060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.669091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.669189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.669219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.669397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.669429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.669544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.669576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.669696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.669727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.669837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.669869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.670058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.670091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.670217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.670249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 
00:28:15.240 [2024-12-10 22:59:22.670438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.670471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.670653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.670685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.670859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.240 [2024-12-10 22:59:22.670891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.240 qpair failed and we were unable to recover it. 00:28:15.240 [2024-12-10 22:59:22.671072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.671104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.671281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.671312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.671498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.671529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.671782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.671814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.671997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.672028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.672211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.672245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.672458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.672489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.672605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.672636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.672747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.672779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.672949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.673019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.673187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.673257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.673449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.673485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.673615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.673648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.673818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.673851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.674021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.674053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.674244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.674278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.674391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.674423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.674592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.674623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.674858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.674890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.675010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.675041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.675248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.675281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.675460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.675491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.675663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.675705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.675873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.675905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.676073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.676104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.676296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.676335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.676452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.676484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.676594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.676626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.676794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.676826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.676993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.677025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.677216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.677248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.677446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.677477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.677589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.677621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.677725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.677756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.677969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.678000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.678172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.678207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.678388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.678420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 
00:28:15.241 [2024-12-10 22:59:22.678525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.678557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.678671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.678703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.678887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.678919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.241 [2024-12-10 22:59:22.679088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.241 [2024-12-10 22:59:22.679119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.241 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.679315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.679348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.679474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.679506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.679690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.679721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.679831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.679862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.679976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.680008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.680201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.680234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.680405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.680436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.680604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.680636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.680814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.680884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.681078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.681114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.681248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.681283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.681453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.681484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.681615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.681646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.681813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.681844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.682005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.682035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.682205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.682239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.682354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.682385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.682496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.682526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.682627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.682658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.682859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.682891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.683060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.683090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.683198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.683240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.683356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.683387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.683573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.683604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.683856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.683888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.684055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.684086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.684197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.684228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.684434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.684465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.684646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.684677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.684801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.684831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.685095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.685127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.685318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.685358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.685533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.685565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.685674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.685704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.685894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.685924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 00:28:15.242 [2024-12-10 22:59:22.686035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.242 [2024-12-10 22:59:22.686066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.242 qpair failed and we were unable to recover it. 
00:28:15.242 [2024-12-10 22:59:22.686257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.686289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.686409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.686440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.686551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.686581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.686696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.686727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.686838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.686869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.687127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.687167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.687269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.687300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.687428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.687458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.687624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.687656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.242 [2024-12-10 22:59:22.687837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.242 [2024-12-10 22:59:22.687869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.242 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.688066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.688225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.688389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.688599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.688741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.688892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.688993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.689025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.689144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.689203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.689330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.689362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.689526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.689556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.689721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.689751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.689999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.690031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.690292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.690325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.690516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.690549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.690715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.690747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.690862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.690892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.691016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.691052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.691238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.691270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.691528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.691560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.691665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.691696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.691865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.691897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.692015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.692046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.692285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.692317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.692434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.692466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.692661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.692693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.692879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.692911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.693019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.693051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.693176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.693210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.693310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.693341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.693440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.693477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.693594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.693626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.693792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.693823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.694084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.694115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.694254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.694287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.694472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.694504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.694672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.694704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.694945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.694977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.695091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.695122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.695251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.695283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.695463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.695494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.695676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.695708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.695823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.695855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.696091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.696122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.696257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.696293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.696416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.696446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.243 [2024-12-10 22:59:22.696623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.243 [2024-12-10 22:59:22.696654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.243 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.696823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.696855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.697042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.697074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.697190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.697223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.697389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.697420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.697608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.697640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.697778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.697810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.697934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.697965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.698140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.698180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.698296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.698329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.698501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.698532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.698649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.698685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.698922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.698960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.699073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.699105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.699215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.699247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.699356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.699386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.699625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.699657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.699916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.699946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.700116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.700148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.700281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.700312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.700495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.700525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.700626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.700658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.700777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.700807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.700971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.701001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.701115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.701146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.701344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.701376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.701546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.701577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.701758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.701788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.701904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.701936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.702049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.702079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.702190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.702224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.702488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.702521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.702625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.702655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.702771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.702802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.702928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.702960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.703074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.703104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.703339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.703372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.703486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.703517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.703686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.703718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.703915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.244 [2024-12-10 22:59:22.703946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.244 qpair failed and we were unable to recover it.
00:28:15.244 [2024-12-10 22:59:22.704216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.704249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 00:28:15.244 [2024-12-10 22:59:22.704354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.704385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 00:28:15.244 [2024-12-10 22:59:22.704577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.704608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 00:28:15.244 [2024-12-10 22:59:22.704795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.704826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 00:28:15.244 [2024-12-10 22:59:22.704994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.705026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 
00:28:15.244 [2024-12-10 22:59:22.705184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.705216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 00:28:15.244 [2024-12-10 22:59:22.705387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.244 [2024-12-10 22:59:22.705418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.244 qpair failed and we were unable to recover it. 00:28:15.244 [2024-12-10 22:59:22.705611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.705643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.705856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.705888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.706015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.706047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.706234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.706266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.706435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.706466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.706643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.706682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.706785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.706817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.707005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.707036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.707250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.707282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.707452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.707482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.707598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.707630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.707878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.707910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.708016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.708047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.708148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.708188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.708356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.708387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.708510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.708540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.708660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.708690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.708875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.708906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.709029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.709059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.709256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.709288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.709578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.709610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.709724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.709756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.709941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.709973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.710086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.710116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.710300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.710333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.710455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.710487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.710652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.710682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.710813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.710843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.711007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.711039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.711156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.711199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.711310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.711341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.711466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.711497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.711665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.711702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.711870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.711900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.712003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.712034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.712234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.712268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.712387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.712419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.712592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.712623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.712739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.712771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.712876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.712907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.713072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.713103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.713302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.713334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.245 [2024-12-10 22:59:22.713445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.713477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 
00:28:15.245 [2024-12-10 22:59:22.713649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.245 [2024-12-10 22:59:22.713681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.245 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.713781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.713812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.714103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.714135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.714319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.714351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.714455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.714486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.714732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.714763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.714953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.714986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.715152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.715193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.715428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.715459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.715696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.715727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.716000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.716031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.716149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.716190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.716309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.716339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.716508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.716539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.716742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.716773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.716969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.716999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.717103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.717134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.717262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.717293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.717393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.717424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.717618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.717649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.717782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.717813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.717924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.717956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.718058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.718090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.718193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.718225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.718337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.718368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.718577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.718609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.718868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.718900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.719012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.719044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.719170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.719203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.719370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.719401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.719571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.719608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.719774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.719806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.719971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.720002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.720233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.720265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.720405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.720437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.246 [2024-12-10 22:59:22.720568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.720600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.720771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.720802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.720902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.720933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.721050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.721081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 00:28:15.246 [2024-12-10 22:59:22.721181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.246 [2024-12-10 22:59:22.721214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.246 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.743102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.743133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.743242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.743273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.743390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.743421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.743604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.743634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.743799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.743830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.744030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.744061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.744250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.744282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.744472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.744504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.744624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.744654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.744780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.744811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.744988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.745019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.745115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.745147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.745268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.745301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.745487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.745518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.745640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.745671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.745839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.745871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.746147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.746194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.746313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.746344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.746442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.746473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.746686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.746717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.746894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.746925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.747097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.747128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.747244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.747276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.747445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.747476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.747657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.747689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.747908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.747938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.748127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.748179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.748370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.748401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.748581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.748611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.748802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.748834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.749009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.749208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.749354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.749504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.749644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.749806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.749957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.749989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.750088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.750119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.750295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.750328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.750440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.750471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 
00:28:15.249 [2024-12-10 22:59:22.750659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.750689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.750856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.750888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.751071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.751102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.249 [2024-12-10 22:59:22.751332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.249 [2024-12-10 22:59:22.751363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.249 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.751538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.751570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.751673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.751703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.751897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.751929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.752131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.752169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.752351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.752382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.752577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.752607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.752774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.752805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.752969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.753000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.753118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.753148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.753274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.753305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.753472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.753502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.753669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.753700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.753879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.753910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.754037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.754068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.754179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.754211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.754404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.754435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.754540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.754571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.754746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.754777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.754946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.754977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.755146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.755186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.755304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.755335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.755506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.755538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.755703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.755734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.755898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.755929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.756094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.756125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.756262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.756293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.756477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.756508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.756752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.756783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.756907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.756939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.757106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.757136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.757332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.757365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.757488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.757518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.757685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.757717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.757991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.758021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.758262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.758294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.758465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.758497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.758664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.758694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.758908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.758939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.759118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.759149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.759344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.759375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.759485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.759521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.759628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.759658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.759762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.759792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.759901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.759932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.760100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.760131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.760308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.760338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.250 [2024-12-10 22:59:22.760512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.760543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.760645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.760676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.760791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.760822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.761028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.761059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 00:28:15.250 [2024-12-10 22:59:22.761268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.250 [2024-12-10 22:59:22.761301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.250 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.761468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.761499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.761666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.761697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.761797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.761827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.762022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.762053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.762170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.762203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.762455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.762485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.762649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.762680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.762869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.762900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.763091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.763121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.763297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.763329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.763440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.763471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.763583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.763613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.763874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.763905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.764013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.764044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.764171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.764204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.764408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.764439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.764605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.764636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.764837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.764867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.765111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.765142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.765330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.765361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.765551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.765581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.765792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.765823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.766015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.766046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.766174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.766208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.766327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.766358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.766483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.766513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.766701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.766732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.766850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.766882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.767002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.767033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.767207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.767240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.767405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.767441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.767700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.767729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.767869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.767900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.768016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.768047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.768243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.768274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.768442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.768472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.768642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.768672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.768839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.768870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.769046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.769077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.769294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.769326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.769518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.769548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.769711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.769743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.769908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.769939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.770112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.770144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.770362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.770393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.770579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.770609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.770724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.770755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.770864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.770894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.771080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.771111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.771265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.771298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 
00:28:15.251 [2024-12-10 22:59:22.771403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.771434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.771627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.771658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.251 [2024-12-10 22:59:22.771768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.251 [2024-12-10 22:59:22.771800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.251 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.771910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.771941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.772107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.772139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.772316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.772348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.772522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.772553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.772756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.772792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.772976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.773006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.773181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.773214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.773322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.773353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.773524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.773556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.773757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.773787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.773904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.773935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.774125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.774164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.774329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.774360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.774544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.774575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.774762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.774792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.774959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.774989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.775181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.775213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.775479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.775509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.775701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.775733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.775850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.775880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.775990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.776020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.776133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.776172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.776360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.776392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.776507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.776537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.776717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.776749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.776846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.776878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.776973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.777004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.777223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.777255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.777423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.777455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.777643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.777673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.777839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.777870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 00:28:15.252 [2024-12-10 22:59:22.777986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.252 [2024-12-10 22:59:22.778017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.252 qpair failed and we were unable to recover it. 
00:28:15.252 [2024-12-10 22:59:22.778165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.778196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.778311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.778341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.778508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.778540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.778703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.778734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.778935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.778966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.779078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.779109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.779250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.779281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.779396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.779427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.779594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.779625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.779745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.779775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.779943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.779975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.780138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.780176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.780347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.780378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.780544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.780579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.780820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.780852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.781082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.781113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.781225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.252 [2024-12-10 22:59:22.781259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.252 qpair failed and we were unable to recover it.
00:28:15.252 [2024-12-10 22:59:22.781435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.781465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.781584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.781616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.781736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.781766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.781970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.782000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.782173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.782206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.782405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.782435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.782604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.782636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.782818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.782848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.782969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.783001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.783107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.783140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.783276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.783307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.783496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.783528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.783633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.783666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.783833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.783864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.784033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.784066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.784177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.784210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.784407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.784439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.784554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.784587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.784770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.784800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.784995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.785138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.785294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.785446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.785648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.785800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.785950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.785982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.786232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.786264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.786470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.786502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.786628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.786659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.786777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.786809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.786933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.786965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.787083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.787115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.787241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.787273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.787475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.787505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.787682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.787713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.787904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.787934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.788105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.788136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.788263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.788295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.788406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.788438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.788551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.788582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.788771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.788802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.788972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.789003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.789209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.789243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.789418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.789450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.789551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.789582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.789692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.789723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.789856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.789888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.790004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.790036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.790210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.790243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.790412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.790444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.790615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.790648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.790772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.790802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.790968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.790999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.791122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.791154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.791274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.791306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.791412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.253 [2024-12-10 22:59:22.791444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.253 qpair failed and we were unable to recover it.
00:28:15.253 [2024-12-10 22:59:22.791613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.791644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.791767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.791799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.791981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.792012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.792111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.792143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.792317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.792349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.792511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.792542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.792650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.792681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.792856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.792886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.792986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.793024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.793141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.793204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.793304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.793335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.793497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.793528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.793695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.793726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.793990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.794203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.794357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.794505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.794649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.794778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.794916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.794947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.795065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.795095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.795294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.254 [2024-12-10 22:59:22.795326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.254 qpair failed and we were unable to recover it.
00:28:15.254 [2024-12-10 22:59:22.795525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.795558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.795793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.795826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.796080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.796112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.796381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.796413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.796529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.796561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 
00:28:15.254 [2024-12-10 22:59:22.796681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.796713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.796904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.796937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.797050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.797082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.797185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.797216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.797420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.797452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 
00:28:15.254 [2024-12-10 22:59:22.797714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.797747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.797849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.797878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.798077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.798108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.798355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.798393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.798505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.798536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 
00:28:15.254 [2024-12-10 22:59:22.798702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.798735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.798931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.798961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.799241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.799274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.799442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.799472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.799576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.799608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 
00:28:15.254 [2024-12-10 22:59:22.799776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.799808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.799973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.800005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.800120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.800152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.800271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.800301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.800484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.800517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 
00:28:15.254 [2024-12-10 22:59:22.800712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.800743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.800933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.800965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.801082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.801116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.801350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.801384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.801492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.801523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 
00:28:15.254 [2024-12-10 22:59:22.801642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.801673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.801791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.801822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.802080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.802112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.802239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.802270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.254 qpair failed and we were unable to recover it. 00:28:15.254 [2024-12-10 22:59:22.802438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.254 [2024-12-10 22:59:22.802469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.802638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.802669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.802805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.802837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.802955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.802988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.803101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.803133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.803307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.803338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.803443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.803474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.803596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.803627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.803747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.803779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.803887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.803917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.804179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.804213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.804327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.804359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.804543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.804574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.804741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.804774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.805013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.805046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.805211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.805243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.805423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.805455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.805659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.805690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.805867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.805900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.806069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.806100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.806296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.806333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.806449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.806487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.806665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.806696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.806833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.806864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.806991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.807021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.807187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.807218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.807329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.807359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.807551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.807581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.807763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.807794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.807987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.808018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.808186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.808217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.808394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.808425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.808629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.808660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.808779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.808810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.808926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.808957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.809122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.809153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.809308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.809340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.809459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.809491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.809683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.809714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.809918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.809949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.810126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.810169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.810286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.810316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.810533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.810562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.810663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.810691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.810818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.810848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.811054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.811085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.255 [2024-12-10 22:59:22.811257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.811289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.811395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.811427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.811549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.811579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.811746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.811782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 00:28:15.255 [2024-12-10 22:59:22.811898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.255 [2024-12-10 22:59:22.811928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.255 qpair failed and we were unable to recover it. 
00:28:15.256 [2024-12-10 22:59:22.812041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.256 [2024-12-10 22:59:22.812071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.256 qpair failed and we were unable to recover it.
00:28:15.257 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated 115 times, from 2024-12-10 22:59:22.812278 through 2024-12-10 22:59:22.834049 ...]
00:28:15.257 [2024-12-10 22:59:22.834176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.257 [2024-12-10 22:59:22.834209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.257 qpair failed and we were unable to recover it. 00:28:15.257 [2024-12-10 22:59:22.834316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.257 [2024-12-10 22:59:22.834349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.834542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.834575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.834702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.834733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.834875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.834908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.835090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.835123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.835250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.835282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.835451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.835483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.835652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.835684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.835787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.835820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.835987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.836122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.836271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.836488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.836636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.836787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.836931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.836963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.837083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.837118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.837306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.837339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.837440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.837473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.837654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.837687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.837799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.837831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.838094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.838126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.838318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.838350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.838470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.838504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.838686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.838720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.838838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.838871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.839052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.839084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.839283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.839318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.839439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.839471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.839636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.839669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.839782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.839821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.839922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.839954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.840228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.840261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.840432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.840466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.840706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.840738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.840846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.840880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.841050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.841082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.841211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.841245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.841424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.841457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.841569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.841603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.841772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.841805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.841934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.841967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.842165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.842200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.842367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.842401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.842671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.842703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.842876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.842909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.843079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.843113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.843386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.843418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.843610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.843645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.843812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.843843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.844017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.844051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.844179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.844213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.844328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.844362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.844472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.844504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 
00:28:15.258 [2024-12-10 22:59:22.844710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.844741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.844856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.844888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.258 qpair failed and we were unable to recover it. 00:28:15.258 [2024-12-10 22:59:22.845166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.258 [2024-12-10 22:59:22.845199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.845414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.845451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.845575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.845608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 
00:28:15.259 [2024-12-10 22:59:22.845846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.845878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.846053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.846085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.846289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.846322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.846513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.846545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.846670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.846703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 
00:28:15.259 [2024-12-10 22:59:22.846805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.846836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.847004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.847037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.847154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.847199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.847307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.847338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.847464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.847496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 
00:28:15.259 [2024-12-10 22:59:22.847700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.847733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.847849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.847882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.847997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.848028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.848201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.848234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.848402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.848434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 
00:28:15.259 [2024-12-10 22:59:22.848600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.848632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.848875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.848907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.849217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.849252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.849375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.849407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 00:28:15.259 [2024-12-10 22:59:22.849512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.259 [2024-12-10 22:59:22.849544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.259 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.870079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.870112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.870248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.870283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.870468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.870501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.870613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.870646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.870764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.870798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.870922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.870954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.871135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.871179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.871468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.871502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.871627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.871660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.871764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.871796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.871966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.871997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.872232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.872266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.872455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.872487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.872601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.872633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.872749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.872781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.872888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.872920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.873187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.873220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.873339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.873370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.873539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.873610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.873750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.873787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.873984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.874018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.874231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.874268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.874387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.874422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.874548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.874581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.874704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.874736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.874909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.874945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.875051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.875086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.875257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.875292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.875465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.875498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.875763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.875796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.876060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.876094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.876339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.876383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.876573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.876606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.876775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.876808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.876928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.876961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.877077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.877111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.877324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.877358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.877487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.877521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.877629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.877663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.877786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.877821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.877991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.878025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.878232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.878267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.878436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.878469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.878655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.878688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.878923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.878955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 
00:28:15.261 [2024-12-10 22:59:22.879092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.879125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.879244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.879277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.879493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.261 [2024-12-10 22:59:22.879526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.261 qpair failed and we were unable to recover it. 00:28:15.261 [2024-12-10 22:59:22.879628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.879660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.879785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.879819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 
00:28:15.262 [2024-12-10 22:59:22.880024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.880056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.880227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.880261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.880378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.880411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.880585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.880617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.880810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.880842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 
00:28:15.262 [2024-12-10 22:59:22.880948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.880981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.881096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.881128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.881336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.881372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.881545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.881582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.881749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.881781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 
00:28:15.262 [2024-12-10 22:59:22.881881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.881914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.882020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.882053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.882223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.882257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.882373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.882405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.882515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.882548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 
00:28:15.262 [2024-12-10 22:59:22.882754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.882785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.882950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.882983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.883088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.883120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.883257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.883291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.883456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.883489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 
00:28:15.262 [2024-12-10 22:59:22.883667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.883701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.883990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.884024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.884133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.884177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.884361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.884394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 00:28:15.262 [2024-12-10 22:59:22.884584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.262 [2024-12-10 22:59:22.884618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.262 qpair failed and we were unable to recover it. 
00:28:15.262 [2024-12-10 22:59:22.884819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.262 [2024-12-10 22:59:22.884851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.262 qpair failed and we were unable to recover it.
[... the connect() failed / sock connection error / qpair failed record above repeats for tqpair=0xa92be0 from 2024-12-10 22:59:22.885022 through 22:59:22.902632 ...]
00:28:15.263 [2024-12-10 22:59:22.902602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.263 [2024-12-10 22:59:22.902632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.263 qpair failed and we were unable to recover it.
00:28:15.263 [2024-12-10 22:59:22.902916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.263 [2024-12-10 22:59:22.902985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.263 qpair failed and we were unable to recover it.
[... the same record repeats for tqpair=0x7fd0ec000b90 from 2024-12-10 22:59:22.903125 through 22:59:22.906847 ...]
00:28:15.264 [2024-12-10 22:59:22.906973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.264 [2024-12-10 22:59:22.907004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.264 qpair failed and we were unable to recover it.
00:28:15.264 [2024-12-10 22:59:22.907198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.907231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.907401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.907431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.907557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.907587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.907771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.907800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.907928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.907960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.908136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.908177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.908377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.908408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.908651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.908685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.908791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.908823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.909014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.909047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.909286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.909321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.909445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.909477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.909590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.909624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.909810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.909842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.909958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.909990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.910124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.910156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.910351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.910384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.910494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.910527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.910702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.910734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.910905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.910939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.911057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.911091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.911202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.911237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.911438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.911472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.911640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.911672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.911790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.911822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.911989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.912021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.912135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.912174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.912302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.912335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.912452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.912484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.912666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.912698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.912901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.912939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.913064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.913096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.913214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.913248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.913359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.913391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.913559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.913591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.913703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.913736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.913864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.913897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.914021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.914054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.914169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.914203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.914375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.914409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.914603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.914635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.914805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.914837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.914955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.914987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.915194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.915228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.915349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.915382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 
00:28:15.264 [2024-12-10 22:59:22.915562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.915595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.915702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.915733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.915902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.264 [2024-12-10 22:59:22.915934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.264 qpair failed and we were unable to recover it. 00:28:15.264 [2024-12-10 22:59:22.916062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.916094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.916206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.916239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 
00:28:15.265 [2024-12-10 22:59:22.916429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.916461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.916564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.916597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.916716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.916748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.916936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.916968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.917142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.917182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 
00:28:15.265 [2024-12-10 22:59:22.917356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.917389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.917588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.917621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.917777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.917848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.917975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.918010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.918128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.918174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 
00:28:15.265 [2024-12-10 22:59:22.918380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.918413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.918538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.918569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.918737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.918769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.918879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.918912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.919014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.919046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 
00:28:15.265 [2024-12-10 22:59:22.919240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.919274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.919444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.919477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.919590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.919622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.919759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.919791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.919983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.920015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 
00:28:15.265 [2024-12-10 22:59:22.920136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.920190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.920388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.920422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.920612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.920644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.920746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.920778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 00:28:15.265 [2024-12-10 22:59:22.920962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.265 [2024-12-10 22:59:22.920994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.265 qpair failed and we were unable to recover it. 
00:28:15.265 [2024-12-10 22:59:22.921116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.921147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.921278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.921312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.921415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.921448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.921627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.921659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.921764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.921796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.921970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.922002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.922113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.922145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.922332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.922365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.922603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.922637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.922759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.922794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.922979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.923012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.923143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.923187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.923392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.923424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.923624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.923659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.923844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.923876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.923982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.924015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.924135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.924178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.924346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.924378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.924552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.924585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.924687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.924721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.924894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.924925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.925041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.925073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.925388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.925461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.925664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.925700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.925972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.926126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.926303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.926465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.926615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.926781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.926916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.926949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.927065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.927098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.927216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.927251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.927444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.927483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.927670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.927702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.927804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.927836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.928151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.928196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.928361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.928394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.928569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.928602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.928704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.928736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.928907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.928939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.929054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.929086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.265 qpair failed and we were unable to recover it.
00:28:15.265 [2024-12-10 22:59:22.929268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.265 [2024-12-10 22:59:22.929303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.929476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.929510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.929699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.929732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.929843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.929876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.930070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.930103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.930218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.930252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.930365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.930397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.930567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.930605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.930714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.930745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.930884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.930916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.931039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.931072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.931194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.931228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.931403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.931435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.931603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.931635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.931736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.931768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.931946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.931978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.932169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.932204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.932309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.932340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.932451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.932484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.932586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.932617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.932836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.932868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.932994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.933028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.933140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.933184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.933306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.933339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.933521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.933556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.933725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.933759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.934020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.934053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.934170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.934204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.934313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.934348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.934514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.934545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.934728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.934760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.934964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.934997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.935116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.935147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.935330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.935365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.935534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.935566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.266 [2024-12-10 22:59:22.935677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.266 [2024-12-10 22:59:22.935711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.266 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.935883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.935915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.936085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.936118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.936255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.936290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.936409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.936442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.936612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.936645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.936759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.936791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.936915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.936948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.937119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.937155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.545 [2024-12-10 22:59:22.937339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.545 [2024-12-10 22:59:22.937372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.545 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.937555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.937589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.937700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.937732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.937999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.938031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.938141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.938184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.938305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.938338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.938514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.938547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.938744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.938776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.939019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.939052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.939236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.939270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.939376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.939408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.939594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.939627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.939749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.939782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.939950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.939984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.940153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.940196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.940308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.940340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.940451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.940483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.940609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.940642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.940752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.940785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.940887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.940922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.941039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.941072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.941199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.941234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.941336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.941370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.941539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.941573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.941814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.941847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.942110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.942142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.942259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.942293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.942407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.942439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.942626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.546 [2024-12-10 22:59:22.942659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.546 qpair failed and we were unable to recover it.
00:28:15.546 [2024-12-10 22:59:22.942850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.942884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.943002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.943035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.943155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.943221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.943408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.943440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.943624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.943657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 
00:28:15.546 [2024-12-10 22:59:22.943775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.943808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.943910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.943944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.944111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.944144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.944333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.944367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.546 [2024-12-10 22:59:22.944471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.944504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 
00:28:15.546 [2024-12-10 22:59:22.944675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.546 [2024-12-10 22:59:22.944707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.546 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.944816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.944848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.945026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.945058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.945293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.945327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.945435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.945467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.945583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.945616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.945737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.945770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.945872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.945905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.946021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.946053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.946170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.946203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.946369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.946400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.946567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.946599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.946782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.946815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.946992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.947025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.947185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.947217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.947339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.947372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.947537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.947569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.947734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.947767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.947936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.947968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.948143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.948183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.948303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.948336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.948515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.948547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.948809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.948842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.948970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.949003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.949124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.949156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.949367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.949399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.949575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.949607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.949776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.949808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.949977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.950009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.950192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.950225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.950399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.950431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.950540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.950573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.950742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.950774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.950941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.950979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.951082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.951114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.951307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.951340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.951467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.951500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.951673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.951705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.951818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.951850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 00:28:15.547 [2024-12-10 22:59:22.951972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.547 [2024-12-10 22:59:22.952005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.547 qpair failed and we were unable to recover it. 
00:28:15.547 [2024-12-10 22:59:22.952180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.952213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.952381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.952413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.952586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.952619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.952789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.952822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.952920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.952953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-12-10 22:59:22.953137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.953181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.953354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.953387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.953562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.953594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.953700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.953733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.953993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.954026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-12-10 22:59:22.954137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.954181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.954307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.954340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.954439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.954470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.954599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.954632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.954823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.954855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-12-10 22:59:22.955052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.955084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.955203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.955237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.955408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.955441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.955550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.955583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.955697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.955731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-12-10 22:59:22.955937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.955976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.956081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.956115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.956350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.956383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.956553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.956586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 00:28:15.548 [2024-12-10 22:59:22.956761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.548 [2024-12-10 22:59:22.956792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.548 qpair failed and we were unable to recover it. 
00:28:15.548 [2024-12-10 22:59:22.957051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.548 [2024-12-10 22:59:22.957082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.548 qpair failed and we were unable to recover it.
00:28:15.548 [the three messages above repeated 8 more times for tqpair=0xa92be0, 22:59:22.957197 through 22:59:22.958508]
00:28:15.548 [2024-12-10 22:59:22.958825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.548 [2024-12-10 22:59:22.958898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.548 qpair failed and we were unable to recover it.
00:28:15.549 [the three messages above repeated 39 more times for tqpair=0x7fd0f0000b90, 22:59:22.959036 through 22:59:22.966316]
00:28:15.549 [2024-12-10 22:59:22.966485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.549 [2024-12-10 22:59:22.966557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:15.549 qpair failed and we were unable to recover it.
00:28:15.550 [the three messages above repeated 39 more times for tqpair=0x7fd0f8000b90, 22:59:22.966684 through 22:59:22.973712]
00:28:15.551 [2024-12-10 22:59:22.973954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.551 [2024-12-10 22:59:22.974026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:15.551 qpair failed and we were unable to recover it.
00:28:15.551 [2024-12-10 22:59:22.974181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.551 [2024-12-10 22:59:22.974218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.551 qpair failed and we were unable to recover it.
00:28:15.551 [the three messages above repeated 24 more times for tqpair=0xa92be0, 22:59:22.974327 through 22:59:22.978605]
00:28:15.551 [2024-12-10 22:59:22.978819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.978852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.978970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.979008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.979206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.979239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.979410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.979444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.979578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.979612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 
00:28:15.551 [2024-12-10 22:59:22.979789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.979821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.979990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.980023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.980206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.980241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.980409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.551 [2024-12-10 22:59:22.980441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.551 qpair failed and we were unable to recover it. 00:28:15.551 [2024-12-10 22:59:22.980611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.980644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.980896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.980928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.981029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.981062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.981246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.981279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.981463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.981497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.981611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.981643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.981769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.981802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.981919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.981952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.982137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.982177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.982373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.982405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.982515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.982546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.982713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.982745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.982936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.982969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.983080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.983113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.983310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.983345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.983525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.983559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.983761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.983795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.983963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.983996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.984114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.984147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.984281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.984320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.984525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.984557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.984722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.984755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.984927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.984960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.985082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.985115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.985230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.985264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.985430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.985462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.985642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.985674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.985782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.985815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.986004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.986037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.986273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.986307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.986500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.986533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.986700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.986733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.986838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.986871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.987051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.987086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.987185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.987218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.987326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.987359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 
00:28:15.552 [2024-12-10 22:59:22.987594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.987627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.987865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.987898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.988019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.988052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.988170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.552 [2024-12-10 22:59:22.988204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.552 qpair failed and we were unable to recover it. 00:28:15.552 [2024-12-10 22:59:22.988314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.988345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 
00:28:15.553 [2024-12-10 22:59:22.988452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.988485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.988584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.988616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.988720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.988753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.988935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.988969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.989079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.989111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 
00:28:15.553 [2024-12-10 22:59:22.989239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.989272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.989476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.989508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.989702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.989734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.989903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.989936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.990104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.990136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 
00:28:15.553 [2024-12-10 22:59:22.990341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.990374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.990493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.990525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.990699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.990730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.990846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.990878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.990993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.991025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 
00:28:15.553 [2024-12-10 22:59:22.991139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.991190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.991427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.991460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.991694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.991727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.991925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.991956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.992082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.992119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 
00:28:15.553 [2024-12-10 22:59:22.992346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.992380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.992499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.992531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.992653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.992686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.992856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.992889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 00:28:15.553 [2024-12-10 22:59:22.993005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.553 [2024-12-10 22:59:22.993038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.553 qpair failed and we were unable to recover it. 
00:28:15.553 [2024-12-10 22:59:22.993232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.553 [2024-12-10 22:59:22.993266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.553 qpair failed and we were unable to recover it.
00:28:15.553 [2024-12-10 22:59:22.993434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.553 [2024-12-10 22:59:22.993467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.553 qpair failed and we were unable to recover it.
00:28:15.553 [2024-12-10 22:59:22.993644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.553 [2024-12-10 22:59:22.993677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.553 qpair failed and we were unable to recover it.
[... the same three-line failure record (posix_sock_create connect() errno = 111 → nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 22:59:22.993 through 22:59:23.015; repeats elided ...]
00:28:15.556 [2024-12-10 22:59:23.015726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.015758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.015924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.015957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.016135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.016177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.016282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.016315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.016423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.016457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 
00:28:15.556 [2024-12-10 22:59:23.016625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.016658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.016893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.016926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.017095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.017127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.017305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.017339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.017512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.017544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 
00:28:15.556 [2024-12-10 22:59:23.017804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.017836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.018004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.018037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.018212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.018247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.556 qpair failed and we were unable to recover it. 00:28:15.556 [2024-12-10 22:59:23.018349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.556 [2024-12-10 22:59:23.018381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.018550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.018582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.018764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.018802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.018912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.018944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.019061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.019093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.019250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.019283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.019452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.019484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.019689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.019721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.019893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.019925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.020106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.020139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.020415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.020448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.020617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.020649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.020820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.020852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.021036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.021068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.021237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.021271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.021451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.021484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.021694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.021727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.021895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.021928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.022118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.022151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.022353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.022390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.022566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.022600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.022767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.022801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.022916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.022949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.023139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.023205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.023394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.023426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.023668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.023701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.023807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.023840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.024016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.024047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.024235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.024269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.024455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.024487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.024683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.024715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.024898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.024931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.025185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.025219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.025388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.025421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.025659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.025691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.025944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.025978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.026177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.026209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 
00:28:15.557 [2024-12-10 22:59:23.026326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.026359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.026472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.026505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.026778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.557 [2024-12-10 22:59:23.026809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.557 qpair failed and we were unable to recover it. 00:28:15.557 [2024-12-10 22:59:23.026979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.027012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.027188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.027222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 
00:28:15.558 [2024-12-10 22:59:23.027418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.027449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.027712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.027750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.027963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.027997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.028215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.028249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.028431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.028470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 
00:28:15.558 [2024-12-10 22:59:23.028640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.028674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.028773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.028806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.029074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.029105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.029218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.029251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.029441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.029474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 
00:28:15.558 [2024-12-10 22:59:23.029640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.029673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.029860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.029892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.030208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.030243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.030508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.030541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.030813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.030844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 
00:28:15.558 [2024-12-10 22:59:23.031024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.031056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.031345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.031380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.031677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.031711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.031901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.031933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.032061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.032093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 
00:28:15.558 [2024-12-10 22:59:23.032324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.032359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.032464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.032497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.032691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.032724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.032835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.032868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 00:28:15.558 [2024-12-10 22:59:23.033081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.558 [2024-12-10 22:59:23.033114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.558 qpair failed and we were unable to recover it. 
00:28:15.561 [2024-12-10 22:59:23.058335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.058371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.058632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.058665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.058844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.058890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.059065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.059099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.059386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.059420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 
00:28:15.561 [2024-12-10 22:59:23.059536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.059569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.059683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.059718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.059956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.059988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.060195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.060229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.060498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.060532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 
00:28:15.561 [2024-12-10 22:59:23.060800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.060833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.061120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.061154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.061347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.061381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.061495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.061529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.061723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.061756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 
00:28:15.561 [2024-12-10 22:59:23.061935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.561 [2024-12-10 22:59:23.061967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.561 qpair failed and we were unable to recover it. 00:28:15.561 [2024-12-10 22:59:23.062146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.062193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.062316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.062348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.062629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.062661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.062855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.062888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.063065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.063098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.063222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.063257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.063502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.063537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.063730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.063764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.064002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.064034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.064228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.064262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.064501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.064534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.064759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.064791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.065053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.065104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.065313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.065349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.065620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.065654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.065853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.065888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.066019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.066051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.066243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.066276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.066401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.066434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.066640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.066673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.066924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.066956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.067125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.067180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.067290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.067323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.067544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.067578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.067855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.067890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.068131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.068178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.068430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.068463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.068575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.068610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.068752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.068786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.068982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.069016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.069290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.069327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.069514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.069547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.069802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.069835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.070029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.070064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.070184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.070218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.070422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.070454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.070557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.070590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.070773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.070805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.562 [2024-12-10 22:59:23.070995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.071028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 
00:28:15.562 [2024-12-10 22:59:23.071206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.562 [2024-12-10 22:59:23.071239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.562 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.071427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.071460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.071651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.071684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.071946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.071979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.072150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.072193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 
00:28:15.563 [2024-12-10 22:59:23.072446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.072479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.072724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.072758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.073038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.073073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.073297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.073331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.073548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.073581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 
00:28:15.563 [2024-12-10 22:59:23.073843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.073876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.073990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.074022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.074286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.074319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.074607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.074640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.074817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.074850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 
00:28:15.563 [2024-12-10 22:59:23.075061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.075100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.075217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.075269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.075439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.075473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.075739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.075773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.075972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.076006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 
00:28:15.563 [2024-12-10 22:59:23.076256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.076291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.076406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.076438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.076702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.076736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.076934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.076969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 00:28:15.563 [2024-12-10 22:59:23.077177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.563 [2024-12-10 22:59:23.077212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.563 qpair failed and we were unable to recover it. 
00:28:15.566 [2024-12-10 22:59:23.103030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.103064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.103183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.103218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.103483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.103517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.103719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.103754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.103935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.103968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 
00:28:15.566 [2024-12-10 22:59:23.104237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.104271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.104568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.104601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.104775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.104807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.104987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.105022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.105142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.105185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 
00:28:15.566 [2024-12-10 22:59:23.105292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.105323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.105588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.105622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.105922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.105955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.106132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.106173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.106304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.106338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 
00:28:15.566 [2024-12-10 22:59:23.106532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.106564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.566 qpair failed and we were unable to recover it. 00:28:15.566 [2024-12-10 22:59:23.106845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.566 [2024-12-10 22:59:23.106889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.107156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.107217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.107503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.107537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.107806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.107839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.108130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.108176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.108349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.108384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.108556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.108589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.108705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.108737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.109005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.109038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.109238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.109273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.109451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.109484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.109678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.109712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.109957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.109991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.110097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.110130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.110320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.110354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.110457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.110490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.110691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.110725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.110935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.110968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.111074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.111105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.111359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.111394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.111566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.111599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.111790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.111824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.111996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.112029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.112151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.112193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.112463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.112497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.112741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.112775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.112981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.113014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.113209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.113245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.113427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.113461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.113586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.113618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.113797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.113829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.113957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.113991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.114173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.114209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.114422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.114455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.114627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.114660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.114831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.114864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.115050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.115083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.115297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.115331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.115460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.115493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 
00:28:15.567 [2024-12-10 22:59:23.115737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.567 [2024-12-10 22:59:23.115771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.567 qpair failed and we were unable to recover it. 00:28:15.567 [2024-12-10 22:59:23.115971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.116005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.116182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.116222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.116350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.116384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.116634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.116668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 
00:28:15.568 [2024-12-10 22:59:23.116913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.116947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.117176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.117211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.117387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.117420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.117592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.117626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.117820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.117854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 
00:28:15.568 [2024-12-10 22:59:23.117971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.118004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.118211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.118247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.118421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.118455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.118661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.118695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.118884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.118917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 
00:28:15.568 [2024-12-10 22:59:23.119178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.119213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.119399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.119433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.119632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.119665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.119838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.119872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 00:28:15.568 [2024-12-10 22:59:23.120047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.568 [2024-12-10 22:59:23.120079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.568 qpair failed and we were unable to recover it. 
00:28:15.568 [2024-12-10 22:59:23.120354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.568 [2024-12-10 22:59:23.120388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.568 qpair failed and we were unable to recover it.
[The identical three-line error sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0xa92be0 at addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats for every subsequent reconnect attempt with timestamps from 22:59:23.120660 through 22:59:23.147598; the verbatim repeats are elided here.]
00:28:15.571 [2024-12-10 22:59:23.147879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.147913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.148039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.148074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.148195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.148232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.148360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.148400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.148583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.148617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 
00:28:15.571 [2024-12-10 22:59:23.148821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.148855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.149110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.149144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.149453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.149486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.149738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.149772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.150083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.150116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 
00:28:15.571 [2024-12-10 22:59:23.150400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.150436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.150616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.150649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.150782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.150817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.151072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.151107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.151415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.151452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 
00:28:15.571 [2024-12-10 22:59:23.151681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.151714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.151944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.151979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.152304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.152341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.152572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.152606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 00:28:15.571 [2024-12-10 22:59:23.152860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.571 [2024-12-10 22:59:23.152894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.571 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.153094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.153129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.153400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.153436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.153711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.153745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.154035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.154070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.154348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.154383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.154589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.154623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.154745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.154779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.155063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.155097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.155247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.155283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.155489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.155523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.155631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.155664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.155991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.156026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.156209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.156244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.156390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.156424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.156625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.156659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.156868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.156902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.157124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.157169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.157367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.157402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.157608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.157641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.157838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.157871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.158016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.158050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.158172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.158206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.158387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.158421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.158648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.158685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.159034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.159069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.159203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.159241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.159374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.159408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.159589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.159623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.159818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.159852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.160123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.160170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.160352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.160388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.160571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.160604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.160882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.160916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.161125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.161172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 00:28:15.572 [2024-12-10 22:59:23.161359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.572 [2024-12-10 22:59:23.161393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.572 qpair failed and we were unable to recover it. 
00:28:15.572 [2024-12-10 22:59:23.161588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.161621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.161844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.161878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.162092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.162127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.162329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.162364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.162551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.162584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.162875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.162909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.163096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.163129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.163252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.163287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.163480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.163514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.163693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.163727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.163911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.163945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.164055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.164089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.164367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.164402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.164541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.164574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.164829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.164862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.164986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.165020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.165232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.165273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.165479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.165513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.165815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.165850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.166105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.166139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.166264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.166297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.166487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.166521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.166700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.166733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.166949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.166983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.167105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.167139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.167352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.167387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.167509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.167543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.167769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.167803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.168006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.168039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.168178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.168213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.168488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.168523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.168807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.168841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.169031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.169067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.169212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.169249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.169440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.169475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.573 [2024-12-10 22:59:23.169677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.169713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.169844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.169880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.170072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.170107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.170406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.170442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 00:28:15.573 [2024-12-10 22:59:23.170660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.573 [2024-12-10 22:59:23.170694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.573 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.170875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.170909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.171089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.171124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.171349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.171385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.171590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.171626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.171835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.171869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.172051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.172086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.172220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.172256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.172394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.172435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.172545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.172580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.172863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.172896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.173009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.173044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.173243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.173280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.173559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.173593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.173704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.173739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.173920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.173955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.174251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.174288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.174402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.174436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.174621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.174663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.174940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.174974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.175154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.175200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.175380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.175413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.175553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.175587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.175788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.175824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.176098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.176134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.176359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.176394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.176610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.176645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.176852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.176887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.177067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.177102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.177374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.177409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.177610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.177645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.177855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.177890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.178152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.178200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.178495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.178530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.178711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.178746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.178945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.178981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.574 [2024-12-10 22:59:23.179111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.179146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.179436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.179471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.179604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.179639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.179919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.179954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 00:28:15.574 [2024-12-10 22:59:23.180148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.574 [2024-12-10 22:59:23.180209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.574 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.180409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.180444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.180640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.180674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.180922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.180956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.181155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.181203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.181495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.181536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.181820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.181855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.182037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.182073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.182264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.182300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.182492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.182527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.182654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.182689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.182889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.182925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.183033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.183069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.183284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.183320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.183448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.183483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.183766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.183801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.184013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.184048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.184181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.184217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.184402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.184436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.184565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.184599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.184728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.184764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.184962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.184998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.185128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.185173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.185294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.185329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.185539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.185572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.185751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.185785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.185906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.185941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.186127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.186176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.186387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.186422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.186603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.186637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.186746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.186781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.186963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.186998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.187205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.187241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.187381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.187417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.187532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.187568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.187750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.187785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 
00:28:15.575 [2024-12-10 22:59:23.188084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.188118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.188272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.188308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.188502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.188536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.188644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.188679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.575 qpair failed and we were unable to recover it. 00:28:15.575 [2024-12-10 22:59:23.188896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.575 [2024-12-10 22:59:23.188930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.576 qpair failed and we were unable to recover it. 
00:28:15.576 [2024-12-10 22:59:23.189073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.576 [2024-12-10 22:59:23.189108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.576 qpair failed and we were unable to recover it. 00:28:15.576 [2024-12-10 22:59:23.189328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.576 [2024-12-10 22:59:23.189365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.576 qpair failed and we were unable to recover it. 00:28:15.576 [2024-12-10 22:59:23.189491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.576 [2024-12-10 22:59:23.189525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.576 qpair failed and we were unable to recover it. 00:28:15.576 [2024-12-10 22:59:23.189704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.576 [2024-12-10 22:59:23.189739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.576 qpair failed and we were unable to recover it. 00:28:15.576 [2024-12-10 22:59:23.189967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.576 [2024-12-10 22:59:23.190002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.576 qpair failed and we were unable to recover it. 
00:28:15.576 [2024-12-10 22:59:23.190195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.190236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.190426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.190461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.190573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.190607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.190787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.190821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.191019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.191055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.191233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.191270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.191389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.191424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.191640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.191674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.191814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.191849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.191962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.191998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.192186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.192221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.192403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.192437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.192564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.192599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.192730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.192765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.192966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.193000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.193125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.193173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.193404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.193438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.193644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.193678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.193809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.193844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.194050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.194085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.194304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.194339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.194519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.194554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.194665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.194701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.194883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.194920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.195098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.195133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.195354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.195387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.195567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.195602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.195749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.195789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.195972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.196006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.196208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.196244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.196367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.196402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.576 [2024-12-10 22:59:23.196579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.576 [2024-12-10 22:59:23.196613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.576 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.196813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.196848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.197026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.197062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.197265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.197301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.197483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.197518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.197634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.197670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.197796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.197831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.198045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.198080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.198302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.198337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.198517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.198551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.198738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.198819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.199048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.199085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.199213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.199249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.199365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.199397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.199576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.199611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.199791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.199825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.200008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.200041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.200232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.200268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.200456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.200490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.200740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.200772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.200902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.200936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.201067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.201102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.201306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.201341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.201537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.201581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.201781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.201815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.201996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.202029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.202210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.202246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.202364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.202398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.202582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.202615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.202817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.202852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.202974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.203007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.203208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.203242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.203426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.203460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.203668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.203702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.203917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.203951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.204132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.577 [2024-12-10 22:59:23.204178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.577 qpair failed and we were unable to recover it.
00:28:15.577 [2024-12-10 22:59:23.204361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.204396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.204609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.204643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.204752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.204784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.204989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.205023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.205153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.205199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.205382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.205415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.205540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.205575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.205851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.205884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.205992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.206027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.206207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.206242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.206425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.206458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.206588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.206622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.206800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.206833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.206968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.207001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.207297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.207333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.207475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.207509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.207710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.207743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.207922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.207956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.208084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.208117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.208243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.208277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.208395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.208429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.208548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.208580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.208706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.208739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.208955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.208989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.209114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.209147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.209337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.209370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.209563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.209596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.209774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.578 [2024-12-10 22:59:23.209812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:15.578 qpair failed and we were unable to recover it.
00:28:15.578 [2024-12-10 22:59:23.209991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.210024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.210201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.210236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.210350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.210384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.210489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.210522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.210651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.210686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 
00:28:15.578 [2024-12-10 22:59:23.210814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.210847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.211058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.211092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.211278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.211314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.211596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.211629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.578 [2024-12-10 22:59:23.211825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.211859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 
00:28:15.578 [2024-12-10 22:59:23.212038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.578 [2024-12-10 22:59:23.212071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.578 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.212182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.212216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.212345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.212379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.212530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.212564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.212745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.212779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.212983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.213016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.213237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.213273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.213394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.213428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.213556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.213588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.213769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.213802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.214070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.214104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.214342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.214378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.214652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.214686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.214812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.214845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.214961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.214995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.215182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.215217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.215551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.215629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.215903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.215981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.216132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.216192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.216384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.216420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.216537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.216570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.216695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.216728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.216884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.216917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.217097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.217130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.217363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.217398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.217525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.217559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.217668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.217710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.217945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.217978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.218176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.218213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.218488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.218522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.218715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.218748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.218857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.218891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.219086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.219121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.219349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.219385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.219494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.219527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.219650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.219683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.219964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.219999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.220105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.220137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.220358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.220393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 00:28:15.579 [2024-12-10 22:59:23.220573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.579 [2024-12-10 22:59:23.220605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.579 qpair failed and we were unable to recover it. 
00:28:15.579 [2024-12-10 22:59:23.220724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.220758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.220932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.220967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.221142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.221189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.221439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.221479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.221592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.221626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.221894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.221927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.222046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.222081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.222298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.222333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.222516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.222551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.222726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.222759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.222883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.222917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.223087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.223120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.223328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.223363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.223491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.223523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.223796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.223830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.224012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.224044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.224335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.224368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.224572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.224605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.224857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.224890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.225027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.225061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.225331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.225366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.225544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.225578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.225751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.225784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.225983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.226016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.226205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.226240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.226437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.226471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.226651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.226685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.226938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.226971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.227267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.227302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.227484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.227518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.227721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.227756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.227870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.227903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.228181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.228217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.228496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.228530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.228658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.228691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 
00:28:15.580 [2024-12-10 22:59:23.228911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.228945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.229148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.229192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.229330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.229364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.229476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.580 [2024-12-10 22:59:23.229510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.580 qpair failed and we were unable to recover it. 00:28:15.580 [2024-12-10 22:59:23.229645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.581 [2024-12-10 22:59:23.229677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.581 qpair failed and we were unable to recover it. 
00:28:15.862 [2024-12-10 22:59:23.254673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.862 [2024-12-10 22:59:23.254707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.862 qpair failed and we were unable to recover it. 00:28:15.862 [2024-12-10 22:59:23.255007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.255041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.255223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.255258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.255461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.255504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.255760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.255793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.256070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.256104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.256266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.256302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.256504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.256538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.256735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.256771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.256899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.256934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.257188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.257224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.257492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.257526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.257732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.257767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.257946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.257981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.258181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.258217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.258342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.258377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.258563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.258598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.258793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.258828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.259020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.259055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.259334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.259370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.259659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.259939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.259973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.260101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.260135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.260329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.260365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.260578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.260613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.260837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.260872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.261052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.261086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.261266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.261303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.261583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.261617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.261862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.261895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.262041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.262080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.262219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.262254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.262380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.262415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.262593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.262628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.262827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.262862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 
00:28:15.863 [2024-12-10 22:59:23.263055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.263089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.263220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.263254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.863 qpair failed and we were unable to recover it. 00:28:15.863 [2024-12-10 22:59:23.263536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.863 [2024-12-10 22:59:23.263570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.263690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.263724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.263910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.263947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.264152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.264200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.264383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.264415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.264547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.264582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.264773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.264808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.264953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.264986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.265286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.265323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.265510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.265544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.265748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.265781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.266009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.266043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.266232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.266269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.266527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.266564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.266692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.266726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.266921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.266955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.267148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.267194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.267330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.267365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.267483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.267518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.267703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.267738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.267859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.267899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.268079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.268112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.268424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.268459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.268578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.268613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.268780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.268814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.268936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.268970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.269101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.269136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.269427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.269461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.269583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.269617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.269827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.269861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.270176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.270212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.270408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.270444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.270635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.270669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.270949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.270984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.271202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.271240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.271446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.271481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.271616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.271649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.271850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.271883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 
00:28:15.864 [2024-12-10 22:59:23.272064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.272098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.864 [2024-12-10 22:59:23.272232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.864 [2024-12-10 22:59:23.272267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.864 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.272468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.272502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.272758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.272791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.273054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.273088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.273226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.273262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.273388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.273422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.273683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.273718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.273850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.273885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.274169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.274206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.274344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.274379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.274583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.274618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.274744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.274781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.274896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.274927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.275114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.275148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.275420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.275456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.275727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.275761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.276064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.276097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.276286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.276320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.276454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.276488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.276672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.276707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.276912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.276946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.277081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.277115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.277420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.277463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.277594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.277628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.277756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.277791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.277969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.278004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.278261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.278297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.278410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.278444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.278623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.278658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.278963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.278996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.279201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.279235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.279365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.279400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.279607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.279642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.279869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.279904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.280087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.280121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.280411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.280446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.280680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.280714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.280927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.280961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 00:28:15.865 [2024-12-10 22:59:23.281219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.281257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.865 qpair failed and we were unable to recover it. 
00:28:15.865 [2024-12-10 22:59:23.281509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.865 [2024-12-10 22:59:23.281545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.281761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.281794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.282024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.282060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.282264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.282301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.282506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.282540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.282762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.282797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.282990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.283023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.283214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.283248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.283443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.283477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.283601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.283636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.283774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.283816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.284025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.284059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.284267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.284302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.284487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.284523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.284666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.284702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.284914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.284949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.285094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.285127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.285268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.285302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.285510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.285543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.285729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.285764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.285948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.285983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.286210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.286248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.286370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.286404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.286622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.286657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.286870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.286904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.287093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.287127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.287325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.287360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.287486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.287521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.287633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.287669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.287863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.287897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.288078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.288112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.288282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.288318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.288523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.288559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.288828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.288862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.289044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.289079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 
00:28:15.866 [2024-12-10 22:59:23.289333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.289368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.289483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.289519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.289844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.289879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.290090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.866 [2024-12-10 22:59:23.290126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.866 qpair failed and we were unable to recover it. 00:28:15.866 [2024-12-10 22:59:23.290322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.290359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 
00:28:15.867 [2024-12-10 22:59:23.290587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.290621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.290847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.290881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.291003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.291036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.291317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.291352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.291546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.291582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 
00:28:15.867 [2024-12-10 22:59:23.291781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.291817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.291998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.292034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.292216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.292251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.292434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.292469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.292674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.292708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 
00:28:15.867 [2024-12-10 22:59:23.292904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.292938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.293060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.293102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.293275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.293310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.293596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.293629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.293900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.293935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 
00:28:15.867 [2024-12-10 22:59:23.294135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.294180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.294315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.294350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.294475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.294510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.294639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.294674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 00:28:15.867 [2024-12-10 22:59:23.294886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.867 [2024-12-10 22:59:23.294920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:15.867 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" records for tqpair=0xa92be0 with addr=10.0.0.2, port=4420 repeated at 22:59:23.295101 through 22:59:23.317240 ...]
00:28:15.870 [2024-12-10 22:59:23.317424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.317507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.317663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.317702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.317820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.317854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.317965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.318000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.318139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.318194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 
00:28:15.870 [2024-12-10 22:59:23.318380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.318412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.318619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.318653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.318790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.318824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.318975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.319010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.319207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.319244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 
00:28:15.870 [2024-12-10 22:59:23.319368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.319403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.319531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.319568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.319695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.319732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.319880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.319924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.320052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.320090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 
00:28:15.870 [2024-12-10 22:59:23.320221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.320257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.320453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.320487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.320673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.320708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.320889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.320923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.321034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.321071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 
00:28:15.870 [2024-12-10 22:59:23.321278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.321314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.321437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.321471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.321595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.321630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.321814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.321849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.321973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.322010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 
00:28:15.870 [2024-12-10 22:59:23.322190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.322225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.322352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.322386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.322520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.322553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.322684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.322717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 00:28:15.870 [2024-12-10 22:59:23.322916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.870 [2024-12-10 22:59:23.322951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.870 qpair failed and we were unable to recover it. 
00:28:15.870 [2024-12-10 22:59:23.323061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.323096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.323217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.323251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.323435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.323469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.323611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.323646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.323862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.323898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 
00:28:15.871 [2024-12-10 22:59:23.324006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.324041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.324172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.324207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.324386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.324420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.324627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.324662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.324789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.324823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 
00:28:15.871 [2024-12-10 22:59:23.324959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.325000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.325198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.325235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.325425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.325459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.325605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.325640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.325777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.325813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 
00:28:15.871 [2024-12-10 22:59:23.325925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.325959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.326224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.326261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.326448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.326482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.326685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.326719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.326901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.326937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 
00:28:15.871 [2024-12-10 22:59:23.327221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.327256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.327457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.327491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.327673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.327709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.327822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.327857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.328140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.328183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 
00:28:15.871 [2024-12-10 22:59:23.328321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.328353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.328577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.328612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.328802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.328835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.329143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.329194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.329318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.329354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 
00:28:15.871 [2024-12-10 22:59:23.329634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.329669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.329881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.329915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.330048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.330083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.330291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.871 [2024-12-10 22:59:23.330328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.871 qpair failed and we were unable to recover it. 00:28:15.871 [2024-12-10 22:59:23.330551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.330585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.330720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.330752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.330864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.330900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.331101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.331136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.331336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.331371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.331564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.331599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.331724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.331761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.331953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.331986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.332176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.332211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.332330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.332363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.332552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.332586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.332814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.332847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.333081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.333116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.333257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.333292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.333418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.333455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.333586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.333622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.333766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.333807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.333945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.333982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.334102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.334136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.334479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.334513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.334634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.334672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.334927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.334962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.335183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.335221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.335437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.335475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.335619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.335654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.335919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.335953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.336183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.336218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.336400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.336434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.336615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.336652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.336855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.336890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.337101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.337137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.337341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.337376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.337511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.337546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.337679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.337712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.337919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.337955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.338076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.338112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 
00:28:15.872 [2024-12-10 22:59:23.338315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.338353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.338478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.338512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.338723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.872 [2024-12-10 22:59:23.338757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.872 qpair failed and we were unable to recover it. 00:28:15.872 [2024-12-10 22:59:23.338943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.338977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.339185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.339222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.339354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.339390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.339580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.339614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.339809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.339844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.340142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.340187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.340322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.340355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.340537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.340571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.340855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.340890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.341079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.341113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.341247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.341281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.341484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.341518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.341639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.341672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.341790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.341825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.342123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.342168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.342431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.342465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.342585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.342619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.342754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.342793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.342983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.343017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.343247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.343283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.343417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.343451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.343633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.343668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.343869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.343904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.344172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.344227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.344387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.344422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.344724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.344758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.345037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.345070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.345265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.345301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.345427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.345462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.345669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.345703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.345895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.345930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.346213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.346248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.346527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.346561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.346792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.346825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.347080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.347114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.347244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.347279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.347418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.347452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 
00:28:15.873 [2024-12-10 22:59:23.347582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.347616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.347820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.873 [2024-12-10 22:59:23.347856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.873 qpair failed and we were unable to recover it. 00:28:15.873 [2024-12-10 22:59:23.347964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.347998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.348281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.348317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.348449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.348482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.348606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.348641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.348916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.348951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.349222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.349258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.349457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.349495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.349623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.349658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.349960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.349994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.350183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.350219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.350398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.350432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.350709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.350743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.350888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.350926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.351124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.351155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.351348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.351382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.351590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.351623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.351822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.351857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.352061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.352096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.352306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.352347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.352472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.352506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.352690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.352724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.352903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.352937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.353079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.353114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.353338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.353373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.353650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.353685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.353887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.353921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.354104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.354139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.354351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.354384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.354499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.354533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.354649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.354685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.354953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.354987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.355127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.355175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.355374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.355409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.355550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.355585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.355712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.355749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.355878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.355912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.356090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.356125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 00:28:15.874 [2024-12-10 22:59:23.356279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.874 [2024-12-10 22:59:23.356314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.874 qpair failed and we were unable to recover it. 
00:28:15.874 [2024-12-10 22:59:23.356514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.875 [2024-12-10 22:59:23.356547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:15.875 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats ~114 more times with successive timestamps from 22:59:23.356733 through 22:59:23.382422; near-identical repetitions elided.]
00:28:15.878 [2024-12-10 22:59:23.382546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.382579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.382793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.382827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.383062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.383098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.383296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.383330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.385304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.385372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 
00:28:15.878 [2024-12-10 22:59:23.385616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.385651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.385801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.385835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.386020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.386055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.386237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.386273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.386484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.386518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 
00:28:15.878 [2024-12-10 22:59:23.386742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.386775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.386978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.387012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.387211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.387247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.387381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.387416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.387543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.387577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 
00:28:15.878 [2024-12-10 22:59:23.387762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.387798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.387918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.387952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.388174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.388209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.388319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.388356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.388548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.388582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 
00:28:15.878 [2024-12-10 22:59:23.388854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.388888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.389025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.389059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.389314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.389348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.389482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.389516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.389769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.389802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 
00:28:15.878 [2024-12-10 22:59:23.390066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.390105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.390452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.390487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.390696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.390730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.390858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.390893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.391084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.391118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 
00:28:15.878 [2024-12-10 22:59:23.391355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.391391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.391624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.391660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.391872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.391908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.878 qpair failed and we were unable to recover it. 00:28:15.878 [2024-12-10 22:59:23.392111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.878 [2024-12-10 22:59:23.392146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.392358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.392393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.392506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.392538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.392745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.392779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.393036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.393069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.393198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.393234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.393424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.393459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.393588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.393622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.393948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.393983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.394180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.394216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.394402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.394436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.394615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.394650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.394885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.394920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.395051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.395085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.395237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.395273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.395474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.395508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.395714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.395748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.395874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.395909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.396183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.396219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.396495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.396530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.396671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.396706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.396906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.396941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.397070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.397103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.397319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.397356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.397478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.397515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.397647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.397681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.397889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.397926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.398121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.398171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.398310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.398345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.398499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.398533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.398823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.398857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.399040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.399074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.399258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.399299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.399432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.399466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.399588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.399623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.399741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.399773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.399978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.400012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 
00:28:15.879 [2024-12-10 22:59:23.400132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.400174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.400358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.879 [2024-12-10 22:59:23.400393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.879 qpair failed and we were unable to recover it. 00:28:15.879 [2024-12-10 22:59:23.400571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.400605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 00:28:15.880 [2024-12-10 22:59:23.400942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.400975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 00:28:15.880 [2024-12-10 22:59:23.401240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.401275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 
00:28:15.880 [2024-12-10 22:59:23.401555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.401589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 00:28:15.880 [2024-12-10 22:59:23.401713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.401747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 00:28:15.880 [2024-12-10 22:59:23.401937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.401973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 00:28:15.880 [2024-12-10 22:59:23.402105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.402140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 00:28:15.880 [2024-12-10 22:59:23.402287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.880 [2024-12-10 22:59:23.402323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.880 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.425849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.425883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.426007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.426039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.426243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.426277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.426414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.426449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.426580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.426614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.426723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.426757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.427013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.427048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.427267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.427302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.427490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.427525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.427671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.427707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.427920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.427958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.428106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.428141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.428274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.428309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.428446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.428481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.428613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.428648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.428912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.428945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.429124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.429168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.429303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.429338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.429543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.429578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.429802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.429838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.430021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.430055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.430182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.430217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.430420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.430454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.430638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.430671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.430899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.430934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.431117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.431151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.431368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.431404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.431586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.431620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.431735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.431769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.431895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.431928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 
00:28:15.883 [2024-12-10 22:59:23.432123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.883 [2024-12-10 22:59:23.432165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.883 qpair failed and we were unable to recover it. 00:28:15.883 [2024-12-10 22:59:23.432372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.432406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.432535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.432568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.432791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.432825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.433004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.433039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.433147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.433211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.433416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.433451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.433580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.433615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.433774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.433810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.434038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.434072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.434273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.434308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.434515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.434550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.434748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.434782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.435060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.435094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.435354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.435390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.435693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.435728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.435916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.435949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.436229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.436264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.436612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.436646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.436858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.436892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.437088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.437122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.437333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.437367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.437576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.437610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.437737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.437769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.437962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.437996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.438129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.438173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.438302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.438336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.438460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.438494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.438627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.438662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.438968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.439002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.439237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.439272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.439542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.439575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.439762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.439796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.440077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.440111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.440338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.440373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.440571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.440604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.440863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.440897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.441026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.441060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.441244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.441279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 00:28:15.884 [2024-12-10 22:59:23.441398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.884 [2024-12-10 22:59:23.441432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.884 qpair failed and we were unable to recover it. 
00:28:15.884 [2024-12-10 22:59:23.441622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.885 [2024-12-10 22:59:23.441654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.885 qpair failed and we were unable to recover it. 00:28:15.885 [2024-12-10 22:59:23.441762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.885 [2024-12-10 22:59:23.441795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.885 qpair failed and we were unable to recover it. 00:28:15.885 [2024-12-10 22:59:23.441992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.885 [2024-12-10 22:59:23.442027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.885 qpair failed and we were unable to recover it. 00:28:15.885 [2024-12-10 22:59:23.442143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.885 [2024-12-10 22:59:23.442185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.885 qpair failed and we were unable to recover it. 00:28:15.885 [2024-12-10 22:59:23.442368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.885 [2024-12-10 22:59:23.442401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.885 qpair failed and we were unable to recover it. 
00:28:15.885 [2024-12-10 22:59:23.442546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.885 [2024-12-10 22:59:23.442580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:15.885 qpair failed and we were unable to recover it.
00:28:15.885 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for each subsequent reconnect attempt to 10.0.0.2 port 4420, timestamps 22:59:23.442785 through 22:59:23.469454 ...]
00:28:15.888 [2024-12-10 22:59:23.469661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.469695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.469979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.470013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.470327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.470362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.470570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.470605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.470752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.470785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 
00:28:15.888 [2024-12-10 22:59:23.470966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.471001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.471290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.471326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.471508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.471544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.471681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.471714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.471932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.471966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 
00:28:15.888 [2024-12-10 22:59:23.472283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.472317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.472584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.472618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.472739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.472772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.473039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.473072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.473191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.473228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 
00:28:15.888 [2024-12-10 22:59:23.473422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.473456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.473683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.473716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.473843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.473878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.474092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.474127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.474289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.474324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 
00:28:15.888 [2024-12-10 22:59:23.474444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.474478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.474665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.474698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.474820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.474854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.475144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.475188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.475329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.475362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 
00:28:15.888 [2024-12-10 22:59:23.475489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.475527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.475663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.475696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.475876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.888 [2024-12-10 22:59:23.475911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.888 qpair failed and we were unable to recover it. 00:28:15.888 [2024-12-10 22:59:23.476117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.476152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.476360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.476395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.476515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.476548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.476784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.476818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.476996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.477036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.477316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.477352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.477571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.477604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.477737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.477770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.477952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.477986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.478183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.478217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.478352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.478385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.478569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.478603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.478789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.478823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.479009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.479043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.479238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.479273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.479409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.479442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.479565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.479599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.479803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.479838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.480098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.480131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.480322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.480357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.480490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.480525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.480708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.480742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.481018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.481051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.481180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.481216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.481369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.481402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.481584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.481617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.481806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.481841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.482151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.482214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.482398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.482432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.482590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.482625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.482844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.482877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.483011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.483044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 
00:28:15.889 [2024-12-10 22:59:23.483230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.483265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.483468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.889 [2024-12-10 22:59:23.483502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.889 qpair failed and we were unable to recover it. 00:28:15.889 [2024-12-10 22:59:23.483633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.483667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.483879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.483914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.484023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.484054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 
00:28:15.890 [2024-12-10 22:59:23.484262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.484297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.484475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.484509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.484627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.484660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.484798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.484832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.485106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.485140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 
00:28:15.890 [2024-12-10 22:59:23.485356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.485389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.485588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.485623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.485835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.485877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.486074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.486108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 00:28:15.890 [2024-12-10 22:59:23.486258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.890 [2024-12-10 22:59:23.486292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.890 qpair failed and we were unable to recover it. 
00:28:15.890 [2024-12-10 22:59:23.486597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:15.890 [2024-12-10 22:59:23.486630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:15.890 qpair failed and we were unable to recover it.
00:28:15.893 [2024-12-10 22:59:23.512662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.512696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.512820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.512854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.513033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.513073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.513271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.513307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.513457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.513491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 
00:28:15.893 [2024-12-10 22:59:23.513638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.513673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.513886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.513921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.514042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.514075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.514190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.514225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.514444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.514479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 
00:28:15.893 [2024-12-10 22:59:23.514610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.514644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.514834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.514870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.515093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.515128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.515360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.515549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.515584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 
00:28:15.893 [2024-12-10 22:59:23.515792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.515826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.515960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.515994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.516202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.516238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.516363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.516396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.516617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.516651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 
00:28:15.893 [2024-12-10 22:59:23.516833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.516868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.516982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.517016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.517222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.517257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.517457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.517491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.517619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.517653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 
00:28:15.893 [2024-12-10 22:59:23.517994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.518028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.518166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.518202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.893 [2024-12-10 22:59:23.518418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.893 [2024-12-10 22:59:23.518452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.893 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.518639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.518674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.519012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.519046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.519343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.519379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.519656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.519690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.519804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.519839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.520057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.520093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.520296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.520331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.520456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.520491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.520601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.520633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.520912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.520947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.521213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.521249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.521446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.521479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.521661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.521695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.521818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.521852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.522055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.522095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.522235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.522269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.522397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.522431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.522615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.522649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.522857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.522890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.523167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.523203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.523314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.523347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.523552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.523585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.523813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.523847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.524026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.524060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.524248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.524282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.524461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.524495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.524690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.524725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.524920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.524954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.525146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.525192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.525400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.525434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.525707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.525741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 00:28:15.894 [2024-12-10 22:59:23.525997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.894 [2024-12-10 22:59:23.526032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.894 qpair failed and we were unable to recover it. 
00:28:15.894 [2024-12-10 22:59:23.526236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.526272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.526495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.526530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.526669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.526704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.526910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.526945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.527270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.527304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 
00:28:15.895 [2024-12-10 22:59:23.527514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.527547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.527729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.527763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.528043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.528078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.528288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.528323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.528632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.528667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 
00:28:15.895 [2024-12-10 22:59:23.528973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.529009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.529192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.529228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.529436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.529471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.529667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.529702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 00:28:15.895 [2024-12-10 22:59:23.529924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.529958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it. 
00:28:15.895 [2024-12-10 22:59:23.530140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.895 [2024-12-10 22:59:23.530198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.895 qpair failed and we were unable to recover it.
00:28:15.898 [2024-12-10 22:59:23.556554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.556590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.556732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.556767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.556885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.556918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.557100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.557134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.557267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.557301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 
00:28:15.898 [2024-12-10 22:59:23.557442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.557477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.557686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.557720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.557848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.557882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.557994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.558027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.558227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.558260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 
00:28:15.898 [2024-12-10 22:59:23.558395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.558429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.558614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.558648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.558832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.558866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.558989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.559024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.559210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.559252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 
00:28:15.898 [2024-12-10 22:59:23.559467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.559502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.559623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.559658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.559967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.560000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.560194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.560228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.560434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.560468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 
00:28:15.898 [2024-12-10 22:59:23.560658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.898 [2024-12-10 22:59:23.560692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.898 qpair failed and we were unable to recover it. 00:28:15.898 [2024-12-10 22:59:23.560931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.560966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.561118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.561152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.561440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.561474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.561608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.561642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.561876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.561909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.562121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.562156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.562419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.562453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.562578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.562612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.562759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.562793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.562984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.563017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.563290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.563325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.563453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.563487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.563691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.563726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.563857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.563891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.564090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.564123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.564281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.564317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.564451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.564486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.564677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.564711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.564909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.564954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.565139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.565180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.565389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.565423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.565553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.565587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.565829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.565864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.566051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.566084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.566284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.566320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.566454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.566488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.566698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.566732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.566842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.566876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.567149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.567195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.567378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.567411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.567596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.567629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.567752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.567786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.568059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.568093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:15.899 [2024-12-10 22:59:23.568384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.568420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 
00:28:15.899 [2024-12-10 22:59:23.568534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:15.899 [2024-12-10 22:59:23.568567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:15.899 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.568724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.568758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.568962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.568996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.569218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.569254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.569408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.569441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 
00:28:16.178 [2024-12-10 22:59:23.569646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.569680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.570011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.570045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.570211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.570245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.570377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.570412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.570616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.570649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 
00:28:16.178 [2024-12-10 22:59:23.570778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.570813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.571023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.571059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.571249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.571283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.571473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.571506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.571717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.571752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 
00:28:16.178 [2024-12-10 22:59:23.571867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.571900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.572056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.572089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.572290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.572325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.572528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.572562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.572686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.572720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 
00:28:16.178 [2024-12-10 22:59:23.572924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.572961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.573073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.573107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.573411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.573444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.178 [2024-12-10 22:59:23.573656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.178 [2024-12-10 22:59:23.573690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.178 qpair failed and we were unable to recover it. 00:28:16.179 [2024-12-10 22:59:23.573988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.179 [2024-12-10 22:59:23.574028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.179 qpair failed and we were unable to recover it. 
[... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fd0f8000b90 addr=10.0.0.2 port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 22:59:23.573 through 22:59:23.598 ...]
00:28:16.180 [2024-12-10 22:59:23.598720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.598756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.598979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.599012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.599201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.599237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.599521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.599555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.599870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.599904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 
00:28:16.180 [2024-12-10 22:59:23.600105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.600139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.600268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.600299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.600438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.600471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.600621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.600655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.600764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.600803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 
00:28:16.180 [2024-12-10 22:59:23.600939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.600973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.601188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.601222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.601409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.601444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.601636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.601668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.601815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.601851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 
00:28:16.180 [2024-12-10 22:59:23.601957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.601991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.602202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.602239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.602393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.602428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.602552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.602588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.602810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.602845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 
00:28:16.180 [2024-12-10 22:59:23.603025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.603058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.603188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.603223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.603344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.603378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.603589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.603623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.603844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.603878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 
00:28:16.180 [2024-12-10 22:59:23.604103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.604138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.604275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.604308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.604447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.604483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.180 [2024-12-10 22:59:23.604605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.180 [2024-12-10 22:59:23.604639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.180 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.604777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.604811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.604932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.604967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.605168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.605206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.605325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.605358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.605480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.605513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.605727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.605761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.605873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.605907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.606095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.606127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.606393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.606427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.606615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.606651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.606844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.606878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.606997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.607032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.607144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.607191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.607328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.607362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.607483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.607516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.607723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.607756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.607881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.607915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.608033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.608066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.608189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.608224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.608336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.608369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.608492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.608536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.608721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.608756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.608877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.608911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.609060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.609095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.609249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.609284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.609411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.609447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.609586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.609620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.609728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.609763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.609885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.609919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.610029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.610064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.610308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.610343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.610458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.610492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.610610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.610645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.610864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.610898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.611025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.611059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.611178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.611214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.611431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.611465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.611721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.611755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.611961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.611994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.612141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.612185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.612385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.612421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.612605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.612640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.612835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.612872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.612991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.613024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.613298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.613332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.613449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.613484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.181 [2024-12-10 22:59:23.613628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.613663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.613801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.613834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.614046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.614081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.614287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.614323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 00:28:16.181 [2024-12-10 22:59:23.614580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.181 [2024-12-10 22:59:23.614615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.181 qpair failed and we were unable to recover it. 
00:28:16.183 [... the same three-line error sequence — posix_sock_create connect() failed, errno = 111 (ECONNREFUSED), followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeated approximately 110 more times between 22:59:23.614728 and 22:59:23.639025 ...]
00:28:16.183 [2024-12-10 22:59:23.639145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.639186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.639408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.639442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.639685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.639718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.639828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.639862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.639984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.640018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.640209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.640245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.640446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.640480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.640664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.640698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.641001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.641035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.641275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.641311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.641513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.641547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.641731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.641765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.641946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.641980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.642211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.642246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.642394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.642429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.642550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.642584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.642692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.642726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.642837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.642871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.643078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.643114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.643270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.643309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.643494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.643529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.643712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.643745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.644010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.644045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.644235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.644270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.644472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.644506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.644688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.644721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.645016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.645049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.645175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.645211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.645355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.645388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.645520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.645554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.645675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.645708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.645943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.645977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.646184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.646219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.646370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.646403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.646582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.646615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.646845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.646879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.647060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.647095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.647301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.647336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.647457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.647490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.647627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.647661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.183 [2024-12-10 22:59:23.647792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.647826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.648031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.648066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.648299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.648334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.648467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.648500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 00:28:16.183 [2024-12-10 22:59:23.648692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.183 [2024-12-10 22:59:23.648727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.183 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.648862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.648895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.649088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.649122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.649334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.649370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.649499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.649532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.649644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.649675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.649803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.649836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.650068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.650102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.650311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.650346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.650471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.650504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.650635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.650669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.650886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.650919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.651202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.651236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.651546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.651580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.651777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.651810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.651991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.652036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.652245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.652279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.652406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.652440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.652560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.652593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.652703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.652737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.652941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.652975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.653252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.653287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.653486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.653520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.653701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.653735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.653992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.654026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.654208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.654242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.654514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.654549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.654928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.654962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.655232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.655268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.655431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.655464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.655585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.655618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.184 [2024-12-10 22:59:23.655895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.655929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.656122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.656156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.656395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.656430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.656608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.656642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 00:28:16.184 [2024-12-10 22:59:23.656911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.184 [2024-12-10 22:59:23.656945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.184 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.682718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.682752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.682950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.682985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.683103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.683137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.683263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.683298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.683501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.683534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.683742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.683775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.683956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.683989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.684244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.684278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.684506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.684540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.684728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.684762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.684892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.684927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.685114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.685147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.685361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.685395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.685596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.685630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.685763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.685795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.685992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.686026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.686294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.686334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.686470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.686503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.686684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.686718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.687046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.687081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.687302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.687338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.687469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.687503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.687688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.687722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.687937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.687971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.688205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.688240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.688376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.688410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.688543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.688576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.688728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.688763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.689018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.689052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.689172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.689208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.689365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.689399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.689679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.689713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.689913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.689947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.690083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.690118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.690275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.690310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.690489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.690522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.690740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.690773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.690952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.690986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.691173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.691209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.691345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.691378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.691507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.691540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.691739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.691773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.692005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.692039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.692239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.692274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.692390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.692424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.692630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.692664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.692886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.692920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.693208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.693243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.693363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.693397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.693598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.693631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.693755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.693789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.693988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.694022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.694146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.694189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.694326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.694360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.694618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.694651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.694911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.694946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.695070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.695111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.695322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.695356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.695490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.695525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.695710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.695744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.695876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.695910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.696113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.696148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.696340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.696375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.696559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.696592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.696776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.696809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.696942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.696976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.697153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.697201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.697382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.697415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.697559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.697592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 00:28:16.186 [2024-12-10 22:59:23.697725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.186 [2024-12-10 22:59:23.697759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.186 qpair failed and we were unable to recover it. 
00:28:16.186 [2024-12-10 22:59:23.697953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.187 [2024-12-10 22:59:23.697988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.187 qpair failed and we were unable to recover it. 00:28:16.187 [2024-12-10 22:59:23.698290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.187 [2024-12-10 22:59:23.698324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.187 qpair failed and we were unable to recover it. 00:28:16.187 [2024-12-10 22:59:23.698462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.187 [2024-12-10 22:59:23.698495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.187 qpair failed and we were unable to recover it. 00:28:16.187 [2024-12-10 22:59:23.698602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.187 [2024-12-10 22:59:23.698636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.187 qpair failed and we were unable to recover it. 00:28:16.187 [2024-12-10 22:59:23.698878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.187 [2024-12-10 22:59:23.698912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.187 qpair failed and we were unable to recover it. 
00:28:16.187 [2024-12-10 22:59:23.699130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.187 [2024-12-10 22:59:23.699172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.187 qpair failed and we were unable to recover it.
00:28:16.187 [2024-12-10 22:59:23.699312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.187 [2024-12-10 22:59:23.699344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.187 qpair failed and we were unable to recover it.
00:28:16.188 [... the same three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock error for tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it." — repeats continuously from 22:59:23.699 through 22:59:23.725 ...]
00:28:16.188 [2024-12-10 22:59:23.725764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.725798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.725908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.725942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.726218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.726252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.726388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.726422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.726543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.726577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 
00:28:16.188 [2024-12-10 22:59:23.726756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.726790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.726995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.727029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.727213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.727247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.727429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.727463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.727657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.727689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 
00:28:16.188 [2024-12-10 22:59:23.727896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.727930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.728114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.728149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.728359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.728394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.728602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.728635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.728748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.728781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 
00:28:16.188 [2024-12-10 22:59:23.729035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.729069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.729325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.729360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.729548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.729583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.729782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.729817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.730093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.730126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 
00:28:16.188 [2024-12-10 22:59:23.730411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.188 [2024-12-10 22:59:23.730446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.188 qpair failed and we were unable to recover it. 00:28:16.188 [2024-12-10 22:59:23.730577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.730611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.730917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.730950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.731212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.731246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.731434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.731467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.731718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.731752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.731934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.731973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.732155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.732208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.732480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.732514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.732728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.732761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.732941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.732974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.733178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.733212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.733340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.733374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.733571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.733604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.733893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.733926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.734167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.734203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.734408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.734440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.734568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.734602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.734801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.734834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.734939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.734972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.735281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.735316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.735571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.735604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.735802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.735837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.736115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.736150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.736435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.736469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.736596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.736630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.736839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.736872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.737076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.737111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.737240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.737273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.737547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.737581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.737860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.737893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.738105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.738139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.738351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.738385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.738600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.738633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.738889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.738924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.739048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.739083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.739191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.739225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.739424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.739458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.739734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.739768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.739963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.739997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.740201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.740238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.740365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.740399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.740655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.740688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.740872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.740905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.741153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.741195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.741376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.741408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.741584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.741624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.741744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.741777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.741966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.742000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.742209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.742244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.742522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.742556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.742837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.742870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.743153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.743199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.743384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.743416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 00:28:16.189 [2024-12-10 22:59:23.743617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.743650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it. 
00:28:16.189 [2024-12-10 22:59:23.743846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.189 [2024-12-10 22:59:23.743879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.189 qpair failed and we were unable to recover it.
[the same connect()/qpair-failure triplet repeats continuously from 22:59:23.743846 through 22:59:23.772396: posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock reporting a sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — i.e. every reconnect attempt to the NVMe/TCP target at 10.0.0.2:4420 was refused]
00:28:16.191 [2024-12-10 22:59:23.772574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.772607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.772803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.772838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.773022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.773056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.773332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.773367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.773491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.773525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.773638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.773672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.773855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.773889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.774087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.774120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.774339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.774374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.774499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.774532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.774825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.774859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.775062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.775096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.775289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.775324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.775508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.775541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.775736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.775774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.776056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.776091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.776391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.776427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.776688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.776721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.776850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.776883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.777087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.777121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.777406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.777441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.777585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.777619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.777740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.777773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.777890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.777925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.778056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.778091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.778276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.778311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.778500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.778534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.778662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.778696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.778877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.778910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.779113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.779147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.779424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.779458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.779740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.779773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.779995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.780029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.780212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.780248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.780431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.780465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.780719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.780752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.780864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.780904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.781155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.781200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.781470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.781504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.781787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.781823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.782032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.782069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.782335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.782369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.782553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.782588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.782867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.782901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.783078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.783111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 
00:28:16.191 [2024-12-10 22:59:23.783393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.783428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.783707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.783741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.784029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.784062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.191 qpair failed and we were unable to recover it. 00:28:16.191 [2024-12-10 22:59:23.784284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.191 [2024-12-10 22:59:23.784319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.784576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.784612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.192 [2024-12-10 22:59:23.784898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.784930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.785234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.785269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.785539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.785574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.785867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.785900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.786081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.786114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.192 [2024-12-10 22:59:23.786314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.786349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.786480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.786513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.786713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.786747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.786960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.786995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.787293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.787329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.192 [2024-12-10 22:59:23.787542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.787577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.787797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.787833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.788075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.788109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.788251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.788287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.788540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.788573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.192 [2024-12-10 22:59:23.788756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.788791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.788980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.789014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.789206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.789241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.789515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.789550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.789676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.789712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.192 [2024-12-10 22:59:23.789891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.789924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.790206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.790243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.790525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.790559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.790745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.790779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.790989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.791025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.192 [2024-12-10 22:59:23.791210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.791245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.791523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.791557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.791845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.791881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.792003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.792037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 00:28:16.192 [2024-12-10 22:59:23.792219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.192 [2024-12-10 22:59:23.792254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.192 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.818491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.818525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.818752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.818786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.818967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.819004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.819281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.819319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.819523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.819558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.819742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.819775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.819982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.820017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.820231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.820267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.820384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.820421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.820613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.820650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.820963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.820998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.821270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.821305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.821442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.821475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.821673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.821710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.821989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.822023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.822132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.822177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.822355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.822389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.822643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.822676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.822978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.823013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.823220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.823255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.823442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.823479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.823593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.823628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.823905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.823939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.824074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.824109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.824274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.824309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.824520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.824556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.824760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.824795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.824978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.825014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.825197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.825233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.825412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.825447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.825627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.825668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.825854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.825889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.826081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.826115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.826242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.826284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.826563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.826596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.826709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.826740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.826921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.826955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.827067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.827100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.827329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.827364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.827546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.827579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.827700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.827733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.827951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.827986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.828177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.828212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.828396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.828430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.828558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.828594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.828718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.828752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.829008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.829042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.829253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.829288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.829399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.829429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.829625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.829659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.829768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.829802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.829947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.829980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.830108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.830142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.830311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.830346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.830459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.830493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.830621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.830654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.830830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.830864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.831068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.831102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.831232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.831268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.831454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.831489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.831699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.831732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.831844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.831878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.832029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.832063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.832177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.832212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.832393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.832427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.832558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.832592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.832778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.832811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.832929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.832963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.833144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.833190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.833299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.833333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 
00:28:16.194 [2024-12-10 22:59:23.833453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.833487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.194 qpair failed and we were unable to recover it. 00:28:16.194 [2024-12-10 22:59:23.833669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.194 [2024-12-10 22:59:23.833704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.833960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.833995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.834208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.834252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.834374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.834409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.834669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.834703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.834887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.834920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.835031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.835065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.835266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.835301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.835524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.835557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.835680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.835714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.835837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.835871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.836070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.836106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.836253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.836289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.836487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.836522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.836703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.836736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.836931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.836966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.837291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.837328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.837526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.837560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.837765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.837800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.838077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.838113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.838395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.838432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.838616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.838651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.838872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.838907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.839128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.839178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.839364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.839401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.839629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.839663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.839897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.839931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.840213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.840249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.840449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.840481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.840615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.840652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.840848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.840884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.841016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.841049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.841308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.841342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.841525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.841558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.841741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.841774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.842076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.842109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.842302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.842337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.842533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.842566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.842839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.842873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.843067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.843101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.843292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.843327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.843607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.843640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.843853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.843892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.844085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.844118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.844410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.844446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.844559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.844592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.844712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.844746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.845003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.845038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.845261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.845299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.845503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.845539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.845797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.845835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.846019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.846053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.846196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.846234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.846431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.846465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.846741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.846774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.846961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.846994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.847181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.847217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.847464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.847497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.847606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.847640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.847818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.847852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.848036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.848072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.848274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.848309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.848434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.848467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.848671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.848705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 
00:28:16.195 [2024-12-10 22:59:23.848883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.848916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.849027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.849058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.849241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.849276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.849543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.195 [2024-12-10 22:59:23.849577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.195 qpair failed and we were unable to recover it. 00:28:16.195 [2024-12-10 22:59:23.849788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.849822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.850010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.850046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.850255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.850289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.850418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.850452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.850577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.850611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.850863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.850898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.851083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.851117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.851363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.851398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.851605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.851639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.851821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.851854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.851971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.852007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.852137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.852184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.852318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.852353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.852536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.852570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.852691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.852730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.852912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.852946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.853061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.853095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.853288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.853322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.853460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.853494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.853698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.853732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.853867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.853900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.854084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.854118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.854323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.854359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.854478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.854509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.854705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.854738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.854854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.854885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.855004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.855035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.855215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.855250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.855387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.855421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.855609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.855643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.855822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.855857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.196 [2024-12-10 22:59:23.855976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.856010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.856198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.856234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.856351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.856385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.856588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.856622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 00:28:16.196 [2024-12-10 22:59:23.856911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.196 [2024-12-10 22:59:23.856945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.196 qpair failed and we were unable to recover it. 
00:28:16.197 [2024-12-10 22:59:23.877458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.197 [2024-12-10 22:59:23.877537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.197 qpair failed and we were unable to recover it. 
00:28:16.198 [2024-12-10 22:59:23.882507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.882542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.882726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.882760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.882991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.883032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.883169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.883205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.883340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.883375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 
00:28:16.198 [2024-12-10 22:59:23.883565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.883599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.883780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.883814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.883957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.883992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.884115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.884149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.884283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.884316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 
00:28:16.198 [2024-12-10 22:59:23.884509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.884542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.198 [2024-12-10 22:59:23.884660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.198 [2024-12-10 22:59:23.884692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.198 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.884875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.884910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.885094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.885131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.885280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.885316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.885430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.885462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.885687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.885721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.885906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.885940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.886177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.886215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.886418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.886471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.886585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.886618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.886740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.886774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.886896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.886930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.887062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.887097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.887298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.887333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.887456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.887490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.887667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.887701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.887903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.887939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.888120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.888154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.888345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.888379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.888653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.888687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.888809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.888844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.889047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.889080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.889243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.889279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.889464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.889497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.889773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.889807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.889926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.889959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.890135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.890181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.890298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.890331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.890446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.890480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.890767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.890802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.891098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.891133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.891405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.891439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.891702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.891781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.891987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.892025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.892210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.892247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.892368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.892401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.892586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.892619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.892807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.892842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 00:28:16.475 [2024-12-10 22:59:23.892967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.893001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.475 qpair failed and we were unable to recover it. 
00:28:16.475 [2024-12-10 22:59:23.893211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.475 [2024-12-10 22:59:23.893246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.893446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.893481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.893680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.893714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.893963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.893997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.894121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.894155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 
00:28:16.476 [2024-12-10 22:59:23.894383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.894418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.894622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.894665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.894950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.894983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.895253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.895289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.895493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.895528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 
00:28:16.476 [2024-12-10 22:59:23.895659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.895692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.895921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.895955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.896173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.896210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.896489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.896523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.896702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.896735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 
00:28:16.476 [2024-12-10 22:59:23.896940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.896976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.897106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.897140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.897449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.897484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.897609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.897645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.897849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.897882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 
00:28:16.476 [2024-12-10 22:59:23.898100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.898134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.898330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.898366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.898549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.898582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.898859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.898893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 00:28:16.476 [2024-12-10 22:59:23.899090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.899124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it. 
00:28:16.476 [2024-12-10 22:59:23.899395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.476 [2024-12-10 22:59:23.899435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.476 qpair failed and we were unable to recover it.
00:28:16.476 – 00:28:16.479 [2024-12-10 22:59:23.899725 – 22:59:23.927349] last two messages repeated for every retry: each connect() to 10.0.0.2, port=4420 failed with errno = 111 (connection refused) for tqpair=0xa92be0, and each attempt ended with: qpair failed and we were unable to recover it.
00:28:16.479 [2024-12-10 22:59:23.927548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.927581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.927858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.927893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.928010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.928045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.928323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.928359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.928555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.928589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 
00:28:16.479 [2024-12-10 22:59:23.928767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.928801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.929078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.929113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.929340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.929375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.929555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.929589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.929742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.929777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 
00:28:16.479 [2024-12-10 22:59:23.930076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.479 [2024-12-10 22:59:23.930109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.479 qpair failed and we were unable to recover it. 00:28:16.479 [2024-12-10 22:59:23.930273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.930308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.930444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.930479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.930601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.930636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.930930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.930964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.931169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.931205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.931433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.931467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.931610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.931644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.931832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.931867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.932071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.932106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.932372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.932408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.932697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.932731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.932854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.932888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.933175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.933212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.933407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.933441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.933740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.933775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.933956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.933990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.934119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.934153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.934373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.934408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.934591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.934631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.934759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.934793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.935045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.935080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.935285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.935320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.935517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.935551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.935740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.935773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.935979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.936013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.936195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.936231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.936492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.936525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.936778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.936812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.936997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.937031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.937214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.937249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.937458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.937493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.937686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.937718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.937905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.937940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.938229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.938266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 
00:28:16.480 [2024-12-10 22:59:23.938541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.938576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.938843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.938876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.480 [2024-12-10 22:59:23.939173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.480 [2024-12-10 22:59:23.939210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.480 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.939477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.939510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.939788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.939822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.481 [2024-12-10 22:59:23.940019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.940053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.940233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.940268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.940452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.940485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.940613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.940646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.940854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.940889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.481 [2024-12-10 22:59:23.941117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.941150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.941291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.941326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.941586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.941619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.941812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.941847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.941974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.942008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.481 [2024-12-10 22:59:23.942287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.942323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.942521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.942555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.942737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.942771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.942878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.942913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.943187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.943222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.481 [2024-12-10 22:59:23.943488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.943522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.943819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.943853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.944137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.944182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.944453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.944487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.944672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.944707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.481 [2024-12-10 22:59:23.944891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.944931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.945062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.945096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.945296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.945331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.945508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.945542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.945825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.945858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.481 [2024-12-10 22:59:23.946134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.946190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.946375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.946409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.946587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.946621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.946898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.946932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 00:28:16.481 [2024-12-10 22:59:23.947219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.481 [2024-12-10 22:59:23.947255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.481 qpair failed and we were unable to recover it. 
00:28:16.484 [2024-12-10 22:59:23.974374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.974409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.974600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.974634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.974840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.974873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.975079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.975114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.975310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.975346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 
00:28:16.484 [2024-12-10 22:59:23.975478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.975511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.975696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.975729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.975986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.484 [2024-12-10 22:59:23.976020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.484 qpair failed and we were unable to recover it. 00:28:16.484 [2024-12-10 22:59:23.976280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.976314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.976614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.976647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.976944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.976978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.977205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.977241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.977355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.977386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.977516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.977549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.977825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.977859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.978137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.978200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.978395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.978433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.978636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.978670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.978884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.978917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.979118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.979151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.979420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.979455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.979749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.979783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.979987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.980020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.980325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.980360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.980583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.980617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.980800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.980833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.981012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.981046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.981261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.981296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.981406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.981441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.981581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.981614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.981805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.981839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.982104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.982138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.982329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.982364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.982647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.982680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.982994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.983028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.983223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.983258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.983519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.983552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.983744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.983777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.983914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.983948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.984202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.984236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.984350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.984384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.984564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.984598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.984794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.984827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.985019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.985058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 00:28:16.485 [2024-12-10 22:59:23.985314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.485 [2024-12-10 22:59:23.985350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.485 qpair failed and we were unable to recover it. 
00:28:16.485 [2024-12-10 22:59:23.985546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.985579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.985763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.985798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.985985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.986020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.986320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.986357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.986540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.986574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 
00:28:16.486 [2024-12-10 22:59:23.986752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.986785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.987046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.987080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.987380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.987416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.987616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.987650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.987848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.987883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 
00:28:16.486 [2024-12-10 22:59:23.988079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.988113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.988329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.988365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.988650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.988685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.988808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.988841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.989115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.989148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 
00:28:16.486 [2024-12-10 22:59:23.989293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.989328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.989527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.989560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.989840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.989874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.990053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.990087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.990286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.990323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 
00:28:16.486 [2024-12-10 22:59:23.990604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.990638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.990756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.990788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.991061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.991094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.991360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.991395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.991693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.991727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 
00:28:16.486 [2024-12-10 22:59:23.991908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.991941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.992069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.992104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.992389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.992423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.992689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.992722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 00:28:16.486 [2024-12-10 22:59:23.992835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.486 [2024-12-10 22:59:23.992866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.486 qpair failed and we were unable to recover it. 
00:28:16.486 [2024-12-10 22:59:23.993146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.486 [2024-12-10 22:59:23.993190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.486 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" records for tqpair=0xa92be0 repeated through 2024-12-10 22:59:24.005469 ...]
00:28:16.488 [2024-12-10 22:59:24.005732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.488 [2024-12-10 22:59:24.005813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.488 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" records for tqpair=0x7fd0f0000b90 repeated through 2024-12-10 22:59:24.021499 ...]
00:28:16.489 [2024-12-10 22:59:24.021694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.489 [2024-12-10 22:59:24.021729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.489 qpair failed and we were unable to recover it. 00:28:16.489 [2024-12-10 22:59:24.021966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.489 [2024-12-10 22:59:24.022001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.489 qpair failed and we were unable to recover it. 00:28:16.489 [2024-12-10 22:59:24.022310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.489 [2024-12-10 22:59:24.022345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.489 qpair failed and we were unable to recover it. 00:28:16.489 [2024-12-10 22:59:24.022531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.022565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.022841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.022875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.023127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.023185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.023320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.023355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.023632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.023670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.023871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.023904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.024041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.024074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.024213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.024247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2534939 Killed "${NVMF_APP[@]}" "$@" 00:28:16.490 [2024-12-10 22:59:24.024427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.024464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.024647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.024682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.024961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.024999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:28:16.490 [2024-12-10 22:59:24.025206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.025247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.025434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:16.490 [2024-12-10 22:59:24.025471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.025614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.025648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:16.490 [2024-12-10 22:59:24.025825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.025863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.025979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.026013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:16.490 [2024-12-10 22:59:24.026197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.026235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.026362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.490 [2024-12-10 22:59:24.026398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.026679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.026715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.026994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.027030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.027337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.027373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.027558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.027595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.027860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.027894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.028122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.028156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.028371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.028406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.028635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.028671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.028857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.028891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.029080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.029115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.029289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.029327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.029514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.029548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.029678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.029711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.029900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.029935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.030065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.030101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.030308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.030344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 
00:28:16.490 [2024-12-10 22:59:24.030623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.030659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.030807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.030839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.490 qpair failed and we were unable to recover it. 00:28:16.490 [2024-12-10 22:59:24.031116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.490 [2024-12-10 22:59:24.031167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.031285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.031321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.031453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.031486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 [2024-12-10 22:59:24.031667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.031700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.031824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.031858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.032061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.032096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.032250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.032285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.032492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.032529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 [2024-12-10 22:59:24.032654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.032689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.032882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.032917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.033121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.033156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.033436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.033473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.033596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.033630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 [2024-12-10 22:59:24.033859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.033896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2535652 00:28:16.491 [2024-12-10 22:59:24.034036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.034075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:16.491 [2024-12-10 22:59:24.034348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.034388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2535652 00:28:16.491 [2024-12-10 22:59:24.034572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.034606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 [2024-12-10 22:59:24.034824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.034860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.035112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.035150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2535652 ']' 00:28:16.491 [2024-12-10 22:59:24.035352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.035389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.035512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.035546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.491 [2024-12-10 22:59:24.035709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.035745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.035951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.035986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:16.491 [2024-12-10 22:59:24.036193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.036231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.036375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.036409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:16.491 [2024-12-10 22:59:24.036551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.491 [2024-12-10 22:59:24.036591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.036775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.036818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:16.491 [2024-12-10 22:59:24.037019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.037054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.491 [2024-12-10 22:59:24.037203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.037241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 [2024-12-10 22:59:24.037367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.037402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.037538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.037573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.037778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.037812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.038040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.038074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 00:28:16.491 [2024-12-10 22:59:24.038322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.491 [2024-12-10 22:59:24.038361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.491 qpair failed and we were unable to recover it. 
00:28:16.491 [2024-12-10 22:59:24.038475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.038507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.038640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.038681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.038814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.038847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.039040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.039074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.039294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.039330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.039562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.039597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.039734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.039769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.040045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.040082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.040363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.040398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.040537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.040578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.040784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.040825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.041112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.041146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.041389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.041428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.041552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.041586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.041773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.041811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.042021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.042055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.042341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.042376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.042507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.042541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.042669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.042702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.042817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.042851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.043035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.043069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.043194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.043228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.043423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.043457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.043667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.043704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.043848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.043884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.044203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.044241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.044370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.044404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.044562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.044595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.044836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.044878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.045072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.045107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.045300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.045335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.045517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.045551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.045781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.045814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.045955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.045989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.046115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.046148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 
00:28:16.492 [2024-12-10 22:59:24.046382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.046416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.046625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.046659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.046999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.047033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.492 [2024-12-10 22:59:24.047182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.492 [2024-12-10 22:59:24.047218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.492 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.047354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.047388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 
00:28:16.493 [2024-12-10 22:59:24.047573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.047609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.047824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.047859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.047981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.048014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.048218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.048252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.048379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.048413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 
00:28:16.493 [2024-12-10 22:59:24.048607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.048641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.048947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.048981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.049169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.049205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.049398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.049433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.049615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.049649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 
00:28:16.493 [2024-12-10 22:59:24.049871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.049906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.050096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.050130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.050327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.050364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.050622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.050656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.050866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.050900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 
00:28:16.493 [2024-12-10 22:59:24.051094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.051127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.051322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.051358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.051544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.051579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.051705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.051741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.051926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.051960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 
00:28:16.493 [2024-12-10 22:59:24.052198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.052234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.052422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.052458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.052653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.052686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.052798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.052832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.053020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.053054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 
00:28:16.493 [2024-12-10 22:59:24.053236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.053272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.053415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.053450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.053648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.053683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.057186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.493 [2024-12-10 22:59:24.057253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.493 qpair failed and we were unable to recover it. 00:28:16.493 [2024-12-10 22:59:24.057569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.057603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.057815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.057847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.057971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.058001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.058193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.058225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.058424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.058455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.058674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.058707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.058908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.058939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.059134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.059187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.059473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.059503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.059630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.059661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.059954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.059984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.060259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.060292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.060602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.060633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.060830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.060860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.061041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.061071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.061319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.061350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.061554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.061588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.061781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.061812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.061992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.062022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.062196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.062228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.062427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.062459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.062662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.062694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.062945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.062977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.063100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.063132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.063261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.063294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.063422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.063452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.063571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.063602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.063806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.063836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.063975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.064006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.064199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.064233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 00:28:16.494 [2024-12-10 22:59:24.067187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.494 [2024-12-10 22:59:24.067242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.494 qpair failed and we were unable to recover it. 
00:28:16.494 [2024-12-10 22:59:24.067492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.067532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.494 qpair failed and we were unable to recover it.
00:28:16.494 [2024-12-10 22:59:24.067669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.067705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.494 qpair failed and we were unable to recover it.
00:28:16.494 [2024-12-10 22:59:24.067891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.067921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.494 qpair failed and we were unable to recover it.
00:28:16.494 [2024-12-10 22:59:24.068097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.068126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.494 qpair failed and we were unable to recover it.
00:28:16.494 [2024-12-10 22:59:24.068349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.068383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.494 qpair failed and we were unable to recover it.
00:28:16.494 [2024-12-10 22:59:24.068603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.068635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.494 qpair failed and we were unable to recover it.
00:28:16.494 [2024-12-10 22:59:24.068747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.494 [2024-12-10 22:59:24.068771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.068880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.068902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.069065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.069093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.069212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.069237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.069433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.069456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.069615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.069638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.069734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.069754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.069932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.069955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.070096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.070281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.070501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.070624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.070764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.070904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.070994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.071830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.071990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.072010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.072184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.072208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.072407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.072430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.072619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.072641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.072935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.072956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.073938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.073962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.074064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.074187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.074297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.074400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.074571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.495 [2024-12-10 22:59:24.074680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.495 qpair failed and we were unable to recover it.
00:28:16.495 [2024-12-10 22:59:24.074843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.074864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.074963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.074982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.075951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.075973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.076066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.076085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.076236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.076258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.076347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.076367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.076468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.076489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.076738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.076761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.076942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.076964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.077048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.077067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.077155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.077191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.077359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.077381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.077569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.077591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.077747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.077770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.077935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.077958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.078148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.078182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.078365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.078388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.078543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.078567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.078678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.078700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.078810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.078831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.079006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.079031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.079104] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization...
00:28:16.496 [2024-12-10 22:59:24.079153] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:16.496 [2024-12-10 22:59:24.079204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.079227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.079395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.079414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.079648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.079668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.079820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.079841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.079946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.079968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.080064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.080085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.080191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.080214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.080410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.080430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.080530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.080551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.080652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.080673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.496 [2024-12-10 22:59:24.080822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.496 [2024-12-10 22:59:24.080844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.496 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.081885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.081906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.082095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.082117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.082210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.082234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.082347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.082371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.082463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.497 [2024-12-10 22:59:24.082485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.497 qpair failed and we were unable to recover it.
00:28:16.497 [2024-12-10 22:59:24.082586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.082610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.082712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.082734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.082814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.082836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.082943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.082968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.083088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.083112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 
00:28:16.497 [2024-12-10 22:59:24.083262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.083285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.083454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.083486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.083636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.083659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.083808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.083831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.084083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.084105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 
00:28:16.497 [2024-12-10 22:59:24.084256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.084278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.084375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.084395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.084563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.084584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.084734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.084756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.084905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.084927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 
00:28:16.497 [2024-12-10 22:59:24.085102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.085124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.085300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.085323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.085426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.085449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.085683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.085705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.085858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.085880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 
00:28:16.497 [2024-12-10 22:59:24.086038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.497 [2024-12-10 22:59:24.086061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.497 qpair failed and we were unable to recover it. 00:28:16.497 [2024-12-10 22:59:24.086163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.086186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.086340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.086362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.086443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.086463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.086653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.086675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.086843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.086866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.086950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.086972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.087053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.087074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.087248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.087272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.087374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.087396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.087552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.087576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.087756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.087778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.087888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.087911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.088072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.088098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.088235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.088261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.088439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.088465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.088655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.088684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.088787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.088813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.088915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.088941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.089054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.089179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.089314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.089453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.089588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.089773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.089892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.089918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.090036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.090067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.090253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.090279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.090382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.090409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.090523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.090549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.090727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.090753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.090856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.090882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.091041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.091067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.091170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.091198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.091303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.091329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.091453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.091480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.091662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.091691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.091782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.091809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.091983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.092011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 00:28:16.498 [2024-12-10 22:59:24.092176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.092204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.498 qpair failed and we were unable to recover it. 
00:28:16.498 [2024-12-10 22:59:24.092312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.498 [2024-12-10 22:59:24.092338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.092595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.092621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.092730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.092756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.092993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.093201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 
00:28:16.499 [2024-12-10 22:59:24.093412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.093543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.093672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.093814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.093932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.093958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 
00:28:16.499 [2024-12-10 22:59:24.094058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.094084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.094175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.094201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.094361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.094388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.094562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.094636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.094803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.094878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 
00:28:16.499 [2024-12-10 22:59:24.095023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.095060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.095315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.095351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.095474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.095507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.095707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.095741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.095919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.095953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 
00:28:16.499 [2024-12-10 22:59:24.096151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.096198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.096377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.096409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.096528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.096561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.096752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.096784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 00:28:16.499 [2024-12-10 22:59:24.096904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.096937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it. 
00:28:16.499 [2024-12-10 22:59:24.097133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.499 [2024-12-10 22:59:24.097176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.499 qpair failed and we were unable to recover it.
[... identical three-line pattern (posix.c:1054:posix_sock_create connect() failed, errno = 111 → nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 22:59:24.097 through 22:59:24.119 for tqpair handles 0x7fd0f8000b90 and 0x7fd0f0000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:28:16.502 [2024-12-10 22:59:24.119937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.119970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.120174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.120208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.120335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.120367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.120502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.120536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.120733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.120765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 
00:28:16.502 [2024-12-10 22:59:24.120885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.120917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.121135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.121182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.121370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.121402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.121575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.502 [2024-12-10 22:59:24.121609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.502 qpair failed and we were unable to recover it. 00:28:16.502 [2024-12-10 22:59:24.121714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.121746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.121921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.121953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.122094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.122127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.122370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.122441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.122656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.122700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.122835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.122868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.123067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.123100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.123238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.123274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.123449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.123483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.123598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.123632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.123807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.123840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.123972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.124004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.124150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.124196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.124389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.124423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.124608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.124641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.124768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.124800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.124996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.125029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.125144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.125190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.125366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.125399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.125574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.125606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.125821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.125853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.126044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.126076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.126191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.126225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.126417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.126449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.126624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.126662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.126837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.126869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.127060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.127092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.127212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.127246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.127452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.127486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.127664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.127698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.127894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.127926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.128047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.128081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.128213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.128259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.128371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.128406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.128511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.128543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.128661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.128701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 
00:28:16.503 [2024-12-10 22:59:24.128823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.128855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.129030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.503 [2024-12-10 22:59:24.129063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.503 qpair failed and we were unable to recover it. 00:28:16.503 [2024-12-10 22:59:24.129278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.129311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.129425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.129459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.129646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.129678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.129796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.129829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.130001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.130033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.130241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.130274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.130456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.130489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.130666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.130697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.130885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.130918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.131033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.131065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.131349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.131381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.131494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.131526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.131698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.131730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.131858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.131891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.132086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.132119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.132325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.132358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.132586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.132618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.132796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.132828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.132946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.132979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.133180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.133213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.133319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.133353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.133469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.133501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.133614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.133646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.133876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.133908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.134015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.134048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.134253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.134290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.134464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.134501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.134633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.134666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.134781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.134813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.134986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.135019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.135220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.135253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.135427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.135459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.135576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.135607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.135789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.135822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.135954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.135986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.136098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.136130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.136385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.136418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 00:28:16.504 [2024-12-10 22:59:24.136597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.504 [2024-12-10 22:59:24.136628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.504 qpair failed and we were unable to recover it. 
00:28:16.504 [2024-12-10 22:59:24.136837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.136870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.137058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.137092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.137279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.137313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.137432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.137464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.137593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.137626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.137752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.137784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.137958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.137999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.138202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.138239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.138513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.138548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.138742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.138774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.138891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.138924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.139111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.139145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.139273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.139306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.139480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.139513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.139618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.139651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.139779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.139811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.140095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.140127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.140257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.140290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.140478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.140512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.140690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.140723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.140849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.140890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.141005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.141039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.141145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.141195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.141319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.141353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.141467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.141500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.141681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.141715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.141821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.141855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.141982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.142015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.142229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.142268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.142466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.142498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.142670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.142703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.142811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.142845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.142954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.142986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.143087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.143121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.143249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.143283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 
00:28:16.505 [2024-12-10 22:59:24.143484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.143518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.143687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.143719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.505 qpair failed and we were unable to recover it. 00:28:16.505 [2024-12-10 22:59:24.143824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.505 [2024-12-10 22:59:24.143858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.144045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.144079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.144277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.144314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.144488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.144521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.144706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.144740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.144856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.144888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.145137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.145177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.145347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.145380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.145548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.145581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.145692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.145725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.145839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.145871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.146043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.146078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.146196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.146231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.146355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.146387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.146503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.146535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.146741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.146774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.146885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.146918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.147109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.147142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.147275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.147307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.147507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.147540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.147665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.147697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.147981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.148014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.148128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.148170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.148279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.148312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.148437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.148473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.148646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.148682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.148800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.148833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.148967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.149001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.149207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.149242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.149443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.149477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.149656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.149688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.149867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.149906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 00:28:16.506 [2024-12-10 22:59:24.150021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.506 [2024-12-10 22:59:24.150053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.506 qpair failed and we were unable to recover it. 
00:28:16.506 [2024-12-10 22:59:24.150295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.150329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.150513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.150545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.150718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.150749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.150872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.150917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.151087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.151119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 
00:28:16.507 [2024-12-10 22:59:24.151255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.151289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.151473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.151505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.151637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.151671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.151849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.151885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.151995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.152025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 
00:28:16.507 [2024-12-10 22:59:24.152171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.152206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.152311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.152344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.152484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.152516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.152632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.152666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.152949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.152981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 
00:28:16.507 [2024-12-10 22:59:24.153094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.153127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.153185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa0b20 (9): Bad file descriptor 00:28:16.507 [2024-12-10 22:59:24.153370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.153442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.153602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.153646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.153762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.153799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.153970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.154005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 
00:28:16.507 [2024-12-10 22:59:24.154181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.154216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.154390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.154424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.154536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.154568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.154749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.154783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 00:28:16.507 [2024-12-10 22:59:24.154956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.507 [2024-12-10 22:59:24.154988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.507 qpair failed and we were unable to recover it. 
00:28:16.507 [2024-12-10 22:59:24.155135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.155179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.155294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.155327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.155439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.155471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.155666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.155698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.155870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.155902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.156037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.156070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.156296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.156330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.156440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.156473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.156644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.156677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.156792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.156825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.507 [2024-12-10 22:59:24.157020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.507 [2024-12-10 22:59:24.157053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.507 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.157222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.157255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.157358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.157390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.157672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.157713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.157925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.157958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.158075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.158109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.158225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.158260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.158499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.158532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.158797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.158832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.158941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.158973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.159092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.159125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.159328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.159365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.159489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.159522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.159631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.159664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.159793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.159831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.160039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.160076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.160221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.160255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.160435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.160468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.160572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.160617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.160727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.160760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.160874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.160907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.161046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.161079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.161181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.161214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.161324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.161357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.161529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.161562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.161736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.161770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.161953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.161986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.162103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.162135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.162269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.162303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.162404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.162437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.162631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.162676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.162856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.162889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.163003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.163038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.163238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.163277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.163395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.508 [2024-12-10 22:59:24.163428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.508 qpair failed and we were unable to recover it.
00:28:16.508 [2024-12-10 22:59:24.163619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.163652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.163824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.163858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.164033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.164065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.164115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:16.509 [2024-12-10 22:59:24.164261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.164298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.164426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.164460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.164645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.164679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.164889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.164922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.165040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.165072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.165248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.165290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.165406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.165439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.165608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.165640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.165810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.165843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.166012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.166044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.166177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.166211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.166343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.166377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.166512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.166546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.166677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.166711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.166888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.166921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.167051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.167085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.167212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.167246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.167359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.167392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.167498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.167530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.167747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.167781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.167896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.167928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.168031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.168066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.168187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.168221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.168402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.168436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.168545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.168581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.168696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.168729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.168924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.168958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.169065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.169098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.169207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.169242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.169408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.169444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.169560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.169593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.169840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.169873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.170071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.170105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.170235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.170271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.170469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.170503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.509 [2024-12-10 22:59:24.170715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.509 [2024-12-10 22:59:24.170749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.509 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.170849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.170883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.171056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.171090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.171307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.171342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.171511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.171543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.171688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.171722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.171895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.171929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.172046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.172080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.172185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.172219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.172330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.172363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.172484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.172522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.172637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.510 [2024-12-10 22:59:24.172672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420
00:28:16.510 qpair failed and we were unable to recover it.
00:28:16.510 [2024-12-10 22:59:24.172862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.172894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.173021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.173055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.173192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.173228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.173419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.173454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.173622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.173655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 
00:28:16.510 [2024-12-10 22:59:24.173861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.173894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.173998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.174033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.174139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.174181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.174359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.174393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.174587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.174621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 
00:28:16.510 [2024-12-10 22:59:24.174867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.174902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.175024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.175057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.175307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.175342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.175513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.175546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.175719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.175753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 
00:28:16.510 [2024-12-10 22:59:24.175860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.175894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.176015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.176048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.176239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.176301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.176492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.176527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.176642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.176675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 
00:28:16.510 [2024-12-10 22:59:24.176800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.176835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.510 [2024-12-10 22:59:24.177007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.510 [2024-12-10 22:59:24.177040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.510 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.177310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.177344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.177454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.177487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.177601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.177634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 
00:28:16.511 [2024-12-10 22:59:24.177760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.177794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.177907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.177940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.178121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.178154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.178335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.178368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.178476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.178509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 
00:28:16.511 [2024-12-10 22:59:24.178690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.178726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.178899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.178932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.179040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.179074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.179199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.179233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.179403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.179439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 
00:28:16.511 [2024-12-10 22:59:24.179611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.179643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.179839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.179871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.180112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.180146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.180343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.180395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.180512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.180545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 
00:28:16.511 [2024-12-10 22:59:24.180650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.180683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.180854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.180888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.181197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.181232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.181356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.181390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.181583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.181616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 
00:28:16.511 [2024-12-10 22:59:24.181733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.181766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.181940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.181972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.182180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.182214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.182317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.182349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.511 [2024-12-10 22:59:24.182463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.182495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 
00:28:16.511 [2024-12-10 22:59:24.182609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.511 [2024-12-10 22:59:24.182642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.511 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.182750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.182783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.182992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.183026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.183137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.183180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.183420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.183455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 
00:28:16.789 [2024-12-10 22:59:24.183561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.183596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.183837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.183870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.184064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.184096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.184220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.184254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.184371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.184404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 
00:28:16.789 [2024-12-10 22:59:24.184534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.184567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.184670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.184703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.184896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.184928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.185134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.185175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.185279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.185326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 
00:28:16.789 [2024-12-10 22:59:24.185457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.185491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.185660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.185694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.185800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.185833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.186008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.186041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 00:28:16.789 [2024-12-10 22:59:24.186147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.789 [2024-12-10 22:59:24.186192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.789 qpair failed and we were unable to recover it. 
00:28:16.789 [2024-12-10 22:59:24.186390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.186424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.186687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.186719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.186903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.186937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.187063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.187095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.187198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.187232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 
00:28:16.790 [2024-12-10 22:59:24.187416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.187448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.187628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.187663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.187777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.187811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.187938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.187976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.188145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.188187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 
00:28:16.790 [2024-12-10 22:59:24.188292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.188326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.188519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.188554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.188727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.188767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.188889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.188921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.189040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.189072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 
00:28:16.790 [2024-12-10 22:59:24.189193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.189226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.189330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.189363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.189465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.189498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.189609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.189642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 00:28:16.790 [2024-12-10 22:59:24.189759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.790 [2024-12-10 22:59:24.189792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.790 qpair failed and we were unable to recover it. 
00:28:16.792 [2024-12-10 22:59:24.205335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.792 [2024-12-10 22:59:24.205390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420
00:28:16.792 qpair failed and we were unable to recover it.
00:28:16.792 [2024-12-10 22:59:24.206400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.792 [2024-12-10 22:59:24.206472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420
00:28:16.792 qpair failed and we were unable to recover it.
00:28:16.792 [2024-12-10 22:59:24.207978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.792 [2024-12-10 22:59:24.208021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.792 qpair failed and we were unable to recover it.
00:28:16.792 [2024-12-10 22:59:24.208125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:16.792 [2024-12-10 22:59:24.208150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:16.792 [2024-12-10 22:59:24.208179] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:16.792 [2024-12-10 22:59:24.208185] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:16.793 [2024-12-10 22:59:24.208191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:16.793 [2024-12-10 22:59:24.208579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.208610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.208744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.208777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.208893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.208928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.209046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.209079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.209184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.209219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 
00:28:16.793 [2024-12-10 22:59:24.209392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.209424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.209543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.209575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.209700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.209732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.209708] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:28:16.793 [2024-12-10 22:59:24.209816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:28:16.793 [2024-12-10 22:59:24.209910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.209924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:16.793 [2024-12-10 22:59:24.209941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 
00:28:16.793 [2024-12-10 22:59:24.209925] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:28:16.793 [2024-12-10 22:59:24.210114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.210146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.210300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.210331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.210577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.210611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.210810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.210842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.211091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.211124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 
00:28:16.793 [2024-12-10 22:59:24.211239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.211272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.211393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.211426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.211603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.211637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.211751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.211784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.211906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.211938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 
00:28:16.793 [2024-12-10 22:59:24.212111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.212144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.212348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.212384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.212577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.212614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.212807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.212847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.212985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.213026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 
00:28:16.793 [2024-12-10 22:59:24.213152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.213200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.213472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.213505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.213627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.213661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.213834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.213868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.213990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.214028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 
00:28:16.793 [2024-12-10 22:59:24.214212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.793 [2024-12-10 22:59:24.214246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.793 qpair failed and we were unable to recover it. 00:28:16.793 [2024-12-10 22:59:24.214370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.214404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.214525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.214560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.214727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.214759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.214864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.214897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.215007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.215050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.215178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.215216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.215436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.215473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.215588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.215632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.215819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.215852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.215966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.215998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.216171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.216207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.216333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.216367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.216538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.216570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.216762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.216797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.216912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.216948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.217138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.217194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.217441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.217476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.217648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.217682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.217900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.217933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.218168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.218203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.218327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.218361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.218468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.218501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.218600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.218633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.218824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.218856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.218983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.219018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.219143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.219187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.219360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.219396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.219590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.219624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.219816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.219851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.220017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.220050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.220329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.220364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.220555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.220589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.220770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.220803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.220919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.220954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.221065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.221097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.221265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.221301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.221471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.221504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.221711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.221747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 00:28:16.794 [2024-12-10 22:59:24.221881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.221917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.794 qpair failed and we were unable to recover it. 
00:28:16.794 [2024-12-10 22:59:24.222037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.794 [2024-12-10 22:59:24.222070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.222235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.222269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.222437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.222471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.222576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.222610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.222735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.222769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.222997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.223031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.223166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.223213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.223418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.223453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.223626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.223658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.223765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.223798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.223971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.224006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.224192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.224226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.224434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.224466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.224572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.224604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.224798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.224831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.224935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.224969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.225137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.225179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.225287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.225320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.225516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.225549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.225740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.225781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.225993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.226028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.226136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.226179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.226301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.226334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.226525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.226557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.226687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.226722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.226844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.226878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.226994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.227027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.227142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.227186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.227350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.227383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.227547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.227580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.227766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.227799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.227980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.228013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.228210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.228245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.228443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.228477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.228654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.228687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 
00:28:16.795 [2024-12-10 22:59:24.228860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.228896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.229173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.229211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.229399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.229433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.229621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.229655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.795 qpair failed and we were unable to recover it. 00:28:16.795 [2024-12-10 22:59:24.229771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.795 [2024-12-10 22:59:24.229803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.229981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.230015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.230153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.230199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.230337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.230377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.230501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.230535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.230655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.230687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.230888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.230922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.231095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.231135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.231254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.231287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.231405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.231438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.231588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.231621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.231785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.231819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.231981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.232014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.232188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.232232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.232407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.232440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.232545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.232578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.232745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.232779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.232960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.232992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.233105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.233139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.233314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.233347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.233458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.233491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.233675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.233707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.233903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.233936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.234104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.234137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.234278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.234312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.234496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.234530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.234700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.234734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.234972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.235008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.235177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.235211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.235417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.235453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.235656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.235690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.235805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.235838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.235952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.235985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.236153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.236198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.236389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.236423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 00:28:16.796 [2024-12-10 22:59:24.236539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.796 [2024-12-10 22:59:24.236572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.796 qpair failed and we were unable to recover it. 
00:28:16.796 [2024-12-10 22:59:24.236679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.236713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.236838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.236872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.237012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.237044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.237168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.237204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.237419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.237453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.237564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.237597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.237776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.237809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.237926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.237961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.238082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.238114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.238250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.238285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.238404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.238438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.238549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.238589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.238782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.238816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.238999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.239035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.239169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.239203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.239322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.239355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.239526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.239561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.239728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.239763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.239876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.239908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.240120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.240155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.240298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.240331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.240494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.240527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.240630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.240663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.240799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.240832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.241015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.241047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.241228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.241263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.241452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.241485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.241625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.241658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.241776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.241809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.241943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.241977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.242147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.242190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.242372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.242404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.242509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.242541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.242723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.242757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 00:28:16.797 [2024-12-10 22:59:24.242877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.797 [2024-12-10 22:59:24.242910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.797 qpair failed and we were unable to recover it. 
00:28:16.797 [2024-12-10 22:59:24.243026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.797 [2024-12-10 22:59:24.243058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.797 qpair failed and we were unable to recover it.
[... identical connect()/qpair failure (errno = 111, ECONNREFUSED; tqpair=0x7fd0f0000b90, addr=10.0.0.2, port=4420) repeats through 2024-12-10 22:59:24.264765; duplicate log entries elided ...]
00:28:16.800 [2024-12-10 22:59:24.264889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.800 [2024-12-10 22:59:24.264921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.800 qpair failed and we were unable to recover it. 00:28:16.800 [2024-12-10 22:59:24.265096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.800 [2024-12-10 22:59:24.265130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.800 qpair failed and we were unable to recover it. 00:28:16.800 [2024-12-10 22:59:24.265331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.265374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.265488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.265522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.265697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.265730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.265834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.265867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.265970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.266185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.266340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.266475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.266628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.266829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.266963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.266995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.267123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.267164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.267336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.267368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.267550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.267582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.267747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.267781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.267952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.267986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.268092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.268126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.268303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.268338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.268510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.268542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.268644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.268684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.268926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.268959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.269077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.269110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.269234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.269269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.269455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.269487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.269601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.269634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.269748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.269780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.269951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.269983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.270179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.270213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.270386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.270418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.270543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.270575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.270682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.270714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.270822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.270853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.271076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.271107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.271295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.271330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.271444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.271477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.271587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.271620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.271793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.271825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.271994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.272027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 
00:28:16.801 [2024-12-10 22:59:24.272145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.801 [2024-12-10 22:59:24.272188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.801 qpair failed and we were unable to recover it. 00:28:16.801 [2024-12-10 22:59:24.272362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.272395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.272521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.272554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.272762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.272794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.272910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.272943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.273067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.273099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.273280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.273314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.273535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.273567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.273739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.273772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.273985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.274019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.274141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.274185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.274321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.274354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.274551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.274583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.274689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.274721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.274912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.274945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.275202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.275236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.275436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.275470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.275645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.275677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.275847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.275880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.275999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.276031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.276201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.276235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.276465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.276503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.276603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.276636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.276827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.276860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.277028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.277060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.277186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.277221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.277334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.277366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.277551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.277583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.277753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.277785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.277952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.277984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.278153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.278195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.278312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.278344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.278516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.278548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.278663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.278696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 00:28:16.802 [2024-12-10 22:59:24.278808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.278840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it. 
00:28:16.802 [2024-12-10 22:59:24.278949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.802 [2024-12-10 22:59:24.278984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.802 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats continuously from 22:59:24.279 through 22:59:24.301: posix_sock_create connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fd0f0000b90, 0x7fd0f8000b90, and 0xa92be0 (all with addr=10.0.0.2, port=4420), and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:28:16.805 [2024-12-10 22:59:24.301252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.805 [2024-12-10 22:59:24.301285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.805 qpair failed and we were unable to recover it. 00:28:16.805 [2024-12-10 22:59:24.301473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.805 [2024-12-10 22:59:24.301505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.805 qpair failed and we were unable to recover it. 00:28:16.805 [2024-12-10 22:59:24.301634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.805 [2024-12-10 22:59:24.301666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.805 qpair failed and we were unable to recover it. 00:28:16.805 [2024-12-10 22:59:24.301833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.805 [2024-12-10 22:59:24.301865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.302037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.302068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.302194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.302233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.302439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.302472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.302641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.302674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.302786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.302817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.302987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.303131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.303309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.303455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.303675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.303815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.303946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.303979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.304083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.304115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.304277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.304310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.304549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.304581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.304770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.304803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.304919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.304952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.305055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.305087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.305200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.305234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.305343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.305376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.305552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.305584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.305710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.305744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.305865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.305897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.306105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.306138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.306273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.306306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.306478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.306510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.306622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.306654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.306753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.306785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.306900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.306937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.307092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.307123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.307303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.307336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.307490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.307523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.307691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.307726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.307920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.307953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.308102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.308134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.308402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.308436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 
00:28:16.806 [2024-12-10 22:59:24.308550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.308582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.308768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.308801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.308979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.806 [2024-12-10 22:59:24.309012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.806 qpair failed and we were unable to recover it. 00:28:16.806 [2024-12-10 22:59:24.309193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.309226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.309395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.309427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 [2024-12-10 22:59:24.309553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.309586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.309770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.309813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.310005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.310042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.310211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.310245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.310351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.310384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 [2024-12-10 22:59:24.310635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.310667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.310836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.310869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.310989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.311020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.311138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.311188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.311298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.311331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 [2024-12-10 22:59:24.311436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.311468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.311660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.311692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.311872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.311904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.312092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.312124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.312246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.312285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 [2024-12-10 22:59:24.312498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.312531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.312793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.312824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.312945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.312979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.313146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.313190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.313304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.313336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 [2024-12-10 22:59:24.313454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.313487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.313605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.313639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.807 [2024-12-10 22:59:24.313764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.313798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.313979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.314013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:16.807 [2024-12-10 22:59:24.314200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.314237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.314400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.314431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.314547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.314582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:16.807 [2024-12-10 22:59:24.314716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.314749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 
00:28:16.807 [2024-12-10 22:59:24.314921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.314953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.807 [2024-12-10 22:59:24.315070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 [2024-12-10 22:59:24.315102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.807 [2024-12-10 22:59:24.315283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.807 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.807 [2024-12-10 22:59:24.315318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.807 qpair failed and we were unable to recover it. 00:28:16.808 [2024-12-10 22:59:24.315515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.808 [2024-12-10 22:59:24.315548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.808 qpair failed and we were unable to recover it. 
00:28:16.810 [2024-12-10 22:59:24.335973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.810 qpair failed and we were unable to recover it. 00:28:16.810 [2024-12-10 22:59:24.336107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.810 qpair failed and we were unable to recover it. 00:28:16.810 [2024-12-10 22:59:24.336265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.810 qpair failed and we were unable to recover it. 00:28:16.810 [2024-12-10 22:59:24.336434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.810 qpair failed and we were unable to recover it. 00:28:16.810 [2024-12-10 22:59:24.336578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.810 qpair failed and we were unable to recover it. 
00:28:16.810 [2024-12-10 22:59:24.336724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.810 qpair failed and we were unable to recover it. 00:28:16.810 [2024-12-10 22:59:24.336876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.810 [2024-12-10 22:59:24.336909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.337030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.337063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.337177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.337212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.337347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.337380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.337486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.337518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.337629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.337661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.337767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.337800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.337977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.338009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.338195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.338228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.338405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.338441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.338559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.338592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.338764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.338799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.338917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.338950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.339067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.339098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.339214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.339249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.339367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.339401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.339527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.339562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.339744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.339777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.339882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.339915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.340018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.340050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.340267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.340300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.340403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.340434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.340601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.340633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.340761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.340794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.340986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.341146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.341332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.341476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.341629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.341778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.341914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.341948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.342064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.342098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.342207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.342241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.342414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.342447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 
00:28:16.811 [2024-12-10 22:59:24.342557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.342590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.811 [2024-12-10 22:59:24.342702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.811 [2024-12-10 22:59:24.342736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.811 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.342839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.342871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.343038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.343070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.343242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.343276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.343529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.343568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.343688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.343721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.343843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.343875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.343997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.344132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.344291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.344434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.344577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.344720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.344869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.344902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.345031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.345063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.345244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.345278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.345393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.345426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.345594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.345632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.345733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.345766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.345869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.345901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.346013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.346047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.346218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.346252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.346363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.346396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.346495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.346529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.346628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.346660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.346836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.346870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.346970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.347177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.347322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.347457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.347591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.347745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.347884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.347917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.348088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.348121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 
00:28:16.812 [2024-12-10 22:59:24.348237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.348269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.348385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.348421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.348530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.812 [2024-12-10 22:59:24.348562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.812 qpair failed and we were unable to recover it. 00:28:16.812 [2024-12-10 22:59:24.348664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.348697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.348799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.348830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-12-10 22:59:24.348935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.813 [2024-12-10 22:59:24.348967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.813 qpair failed and we were unable to recover it.
00:28:16.813 [2024-12-10 22:59:24.349138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.813 [2024-12-10 22:59:24.349183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.813 qpair failed and we were unable to recover it.
00:28:16.813 [2024-12-10 22:59:24.349362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.813 [2024-12-10 22:59:24.349395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.813 qpair failed and we were unable to recover it.
00:28:16.813 [2024-12-10 22:59:24.349513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.813 [2024-12-10 22:59:24.349545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.813 qpair failed and we were unable to recover it.
00:28:16.813 [2024-12-10 22:59:24.349651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:16.813 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:16.813 [2024-12-10 22:59:24.349685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420
00:28:16.813 qpair failed and we were unable to recover it.
00:28:16.813 [2024-12-10 22:59:24.349803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.349836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.349941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.349975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:16.813 [2024-12-10 22:59:24.350103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.350137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.350257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.350291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.813 [2024-12-10 22:59:24.350460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.350495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.350662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.350695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.350796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.350827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.350944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.350977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-12-10 22:59:24.351080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.351112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.351230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.351264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.351385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.351418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.351587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.351621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.351751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.351783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-12-10 22:59:24.351891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.351923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.352028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.352210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.352348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.352483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-12-10 22:59:24.352639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.352774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.352917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.352950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.353057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.353089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.353197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.353230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-12-10 22:59:24.353340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.353373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.353483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.353515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.353621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.353654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.353762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.353795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.353989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.354022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 
00:28:16.813 [2024-12-10 22:59:24.354125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.354156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.813 [2024-12-10 22:59:24.354342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.813 [2024-12-10 22:59:24.354374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.813 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.354494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.354526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.354631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.354663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.354854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.354887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.354989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.355022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.355121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.355154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.355287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.355319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.355446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.355479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.355589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.355620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.355882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.355919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.356033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.356186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.356321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.356461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.356609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.356756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.356895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.356927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.357102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.357135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.357263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.357297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.357407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.357439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.357546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.357579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.357684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.357717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.357827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.357860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.357973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.358176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.358320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.358457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.358591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.358743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.358881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.358913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.359587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.359896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.359996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.360027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.360134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.360193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 
00:28:16.814 [2024-12-10 22:59:24.360505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.814 [2024-12-10 22:59:24.360538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.814 qpair failed and we were unable to recover it. 00:28:16.814 [2024-12-10 22:59:24.360709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.360742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.360915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.360947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.361050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.361186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.815 [2024-12-10 22:59:24.361331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.361481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.361646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.361780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.361928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.361958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.815 [2024-12-10 22:59:24.362154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.362219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.362402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.362431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.362593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.362622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.362721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.362749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.362927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.362956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.815 [2024-12-10 22:59:24.363063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.363092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.363197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.363227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.363335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.363365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.363476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.363505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 00:28:16.815 [2024-12-10 22:59:24.363679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.815 [2024-12-10 22:59:24.363709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.815 qpair failed and we were unable to recover it. 
00:28:16.817 Malloc0 
00:28:16.817 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:16.818 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 
00:28:16.818 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:16.818 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:28:16.818 [2024-12-10 22:59:24.378348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.818 [2024-12-10 22:59:24.378398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.818 qpair failed and we were unable to recover it. 
00:28:16.819 [2024-12-10 22:59:24.384401] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:16.820 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:16.820 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:16.820 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:16.820 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:16.820 [2024-12-10 22:59:24.397697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.820 [2024-12-10 22:59:24.397730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.820 qpair failed and we were unable to recover it. 00:28:16.820 [2024-12-10 22:59:24.397901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.820 [2024-12-10 22:59:24.397934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.820 qpair failed and we were unable to recover it. 00:28:16.820 [2024-12-10 22:59:24.398192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.820 [2024-12-10 22:59:24.398226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.820 qpair failed and we were unable to recover it. 00:28:16.820 [2024-12-10 22:59:24.398396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.398428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.398598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.398630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.398810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.398843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.398961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.398994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.399137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.399179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.399292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.399324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.399562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.399596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.399765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.399798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.399967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.399999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.400187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.400221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.400414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.400446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.400564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.400596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.400713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.400745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.400927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.400960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0ec000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.401130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.401171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.401356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.401385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.401590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.401620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.401849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.401878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f0000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.402103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.402191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.402389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.402436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.402556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.402590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.402784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.402817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.403007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.403039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.403175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.403210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.403383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.403415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.403536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.403569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.403691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.403723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.403907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.403940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.404059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.404091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.404211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.404245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.404432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.404464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.404633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.404666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.404794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.404826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.405091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.821 [2024-12-10 22:59:24.405123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa92be0 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.405355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.405396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:16.821 [2024-12-10 22:59:24.405593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.405628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 
00:28:16.821 [2024-12-10 22:59:24.405741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.405774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.821 [2024-12-10 22:59:24.405903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.821 [2024-12-10 22:59:24.405936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.821 qpair failed and we were unable to recover it. 00:28:16.821 [2024-12-10 22:59:24.406060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.406091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.822 [2024-12-10 22:59:24.406278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.406309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.406448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.406479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.406651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.406681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.406783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.406812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.406975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.407015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.407188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.407220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.407344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.407373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.407633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.407663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.407825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.407855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.407971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.408116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.408257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.408396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.408620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.408768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.408920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.408950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.409141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.409180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.409347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.409377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.409500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.409529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.409645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.409674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.409879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.409909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.410013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.410042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.410210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.410240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.410408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.410437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.410643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.410673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.410788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.410818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.410924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.410954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.411120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.411150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.411349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.411379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.411579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.411608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.411736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.411766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 
00:28:16.822 [2024-12-10 22:59:24.411951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.411981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.412092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.412121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.412238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.412269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.412441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.822 [2024-12-10 22:59:24.412470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.822 qpair failed and we were unable to recover it. 00:28:16.822 [2024-12-10 22:59:24.412582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.412612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.412804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.412834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.412952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.412982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.823 [2024-12-10 22:59:24.413150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.413194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.413318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.413347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:16.823 [2024-12-10 22:59:24.413547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.413577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.413678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.413709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.823 [2024-12-10 22:59:24.413876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.413905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.823 [2024-12-10 22:59:24.414090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.414237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.414389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.414541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.414675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.414806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.414959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.414988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.415172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.415204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.415387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.415416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.415520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.415551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.415672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.415702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.415917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.415949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.416060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.416091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.416221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.416255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 00:28:16.823 [2024-12-10 22:59:24.416370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:16.823 [2024-12-10 22:59:24.416400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fd0f8000b90 with addr=10.0.0.2, port=4420 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.416688] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:16.823 [2024-12-10 22:59:24.425111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.823 [2024-12-10 22:59:24.425218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.823 [2024-12-10 22:59:24.425255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.823 [2024-12-10 22:59:24.425275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.823 [2024-12-10 22:59:24.425293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.823 [2024-12-10 22:59:24.425335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.823 22:59:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2534964 00:28:16.823 [2024-12-10 22:59:24.435008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.823 [2024-12-10 22:59:24.435096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.823 [2024-12-10 22:59:24.435120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.823 [2024-12-10 22:59:24.435133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.823 [2024-12-10 22:59:24.435144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.823 [2024-12-10 22:59:24.435176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.445011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.823 [2024-12-10 22:59:24.445072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.823 [2024-12-10 22:59:24.445089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.823 [2024-12-10 22:59:24.445098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.823 [2024-12-10 22:59:24.445109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.823 [2024-12-10 22:59:24.445129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.823 qpair failed and we were unable to recover it. 
00:28:16.823 [2024-12-10 22:59:24.455029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.823 [2024-12-10 22:59:24.455094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.823 [2024-12-10 22:59:24.455111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.823 [2024-12-10 22:59:24.455121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.823 [2024-12-10 22:59:24.455129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.824 [2024-12-10 22:59:24.455146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.824 qpair failed and we were unable to recover it. 
00:28:16.824 [2024-12-10 22:59:24.465002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.824 [2024-12-10 22:59:24.465078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.824 [2024-12-10 22:59:24.465095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.824 [2024-12-10 22:59:24.465105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.824 [2024-12-10 22:59:24.465113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.824 [2024-12-10 22:59:24.465130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.824 qpair failed and we were unable to recover it. 
00:28:16.824 [2024-12-10 22:59:24.475025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.824 [2024-12-10 22:59:24.475095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.824 [2024-12-10 22:59:24.475112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.824 [2024-12-10 22:59:24.475120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.824 [2024-12-10 22:59:24.475128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.824 [2024-12-10 22:59:24.475146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.824 qpair failed and we were unable to recover it. 
00:28:16.824 [2024-12-10 22:59:24.485040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.824 [2024-12-10 22:59:24.485097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.824 [2024-12-10 22:59:24.485115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.824 [2024-12-10 22:59:24.485123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.824 [2024-12-10 22:59:24.485131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.824 [2024-12-10 22:59:24.485149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.824 qpair failed and we were unable to recover it. 
00:28:16.824 [2024-12-10 22:59:24.495104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:16.824 [2024-12-10 22:59:24.495194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:16.824 [2024-12-10 22:59:24.495211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:16.824 [2024-12-10 22:59:24.495220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:16.824 [2024-12-10 22:59:24.495228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:16.824 [2024-12-10 22:59:24.495247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:16.824 qpair failed and we were unable to recover it. 
00:28:17.083 [2024-12-10 22:59:24.505126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.083 [2024-12-10 22:59:24.505221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.083 [2024-12-10 22:59:24.505237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.083 [2024-12-10 22:59:24.505245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.505253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.505270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.515171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.515240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.515255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.515264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.515274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.515291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.525176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.525236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.525251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.525260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.525269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.525286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.535224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.535291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.535307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.535316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.535324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.535341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.545225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.545285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.545302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.545310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.545318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.545335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.555214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.555274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.555289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.555298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.555306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.555323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.565269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.565322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.565338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.565347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.565356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.565373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.575339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.575401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.575418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.575429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.575437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.575455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.585349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.585419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.585438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.585446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.585454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.585471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.595374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.595427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.595442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.595451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.595460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.595477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.605375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.605432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.605447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.605457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.605464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.605481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.615433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.615505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.615521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.615529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.615537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.615558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.084 [2024-12-10 22:59:24.625458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.084 [2024-12-10 22:59:24.625518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.084 [2024-12-10 22:59:24.625533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.084 [2024-12-10 22:59:24.625544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.084 [2024-12-10 22:59:24.625552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.084 [2024-12-10 22:59:24.625569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.084 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.635482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.635543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.635558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.635567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.635575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.635591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.645523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.645586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.645602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.645611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.645618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.645636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.655548] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.655608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.655624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.655633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.655640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.655657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.665565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.665624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.665641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.665649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.665659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.665676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.675597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.675671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.753779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.753812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.753825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.753857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.755887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.755961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.755985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.755995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.756004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.756026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.765811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.765886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.765912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.765924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.765934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.765961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.775891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.775956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.775972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.775987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.775994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.776013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.785898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.785950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.785965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.785972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.785979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.785995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.795922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.796014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.796029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.796036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.796042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.796057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.085 [2024-12-10 22:59:24.805976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.085 [2024-12-10 22:59:24.806048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.085 [2024-12-10 22:59:24.806062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.085 [2024-12-10 22:59:24.806070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.085 [2024-12-10 22:59:24.806076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.085 [2024-12-10 22:59:24.806091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.085 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.815942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.816006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.345 [2024-12-10 22:59:24.816020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.345 [2024-12-10 22:59:24.816027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.345 [2024-12-10 22:59:24.816033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.345 [2024-12-10 22:59:24.816053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.345 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.825944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.826002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.345 [2024-12-10 22:59:24.826016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.345 [2024-12-10 22:59:24.826023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.345 [2024-12-10 22:59:24.826030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.345 [2024-12-10 22:59:24.826045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.345 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.836071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.836125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.345 [2024-12-10 22:59:24.836139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.345 [2024-12-10 22:59:24.836146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.345 [2024-12-10 22:59:24.836153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.345 [2024-12-10 22:59:24.836174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.345 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.846090] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.846151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.345 [2024-12-10 22:59:24.846170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.345 [2024-12-10 22:59:24.846177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.345 [2024-12-10 22:59:24.846184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.345 [2024-12-10 22:59:24.846200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.345 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.856069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.856165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.345 [2024-12-10 22:59:24.856179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.345 [2024-12-10 22:59:24.856186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.345 [2024-12-10 22:59:24.856192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.345 [2024-12-10 22:59:24.856208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.345 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.866141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.866207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.345 [2024-12-10 22:59:24.866221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.345 [2024-12-10 22:59:24.866228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.345 [2024-12-10 22:59:24.866235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.345 [2024-12-10 22:59:24.866250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.345 qpair failed and we were unable to recover it. 
00:28:17.345 [2024-12-10 22:59:24.876119] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.345 [2024-12-10 22:59:24.876211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.876225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.876232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.876239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.876254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.886134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.886205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.886220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.886227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.886233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.886249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.896219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.896276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.896290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.896297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.896304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.896319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.906245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.906299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.906318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.906325] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.906331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.906347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.916227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.916284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.916298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.916305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.916312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.916327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.926286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.926354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.926368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.926375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.926381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.926397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.936271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.936329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.936343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.936350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.936357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.936372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.946432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.946488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.946501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.946508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.946521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.946537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.956411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.956464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.956478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.956485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.956491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.956508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.966392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.966449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.966463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.966471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.966477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.966493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.976396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.976455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.976469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.976477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.976483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.976499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.986508] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.986565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.986580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.986587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.986594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.986609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:24.996447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:24.996505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:24.996519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:24.996526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:24.996533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.346 [2024-12-10 22:59:24.996549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.346 qpair failed and we were unable to recover it. 
00:28:17.346 [2024-12-10 22:59:25.006595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.346 [2024-12-10 22:59:25.006663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.346 [2024-12-10 22:59:25.006678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.346 [2024-12-10 22:59:25.006686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.346 [2024-12-10 22:59:25.006693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.006708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.347 [2024-12-10 22:59:25.016556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.347 [2024-12-10 22:59:25.016612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.347 [2024-12-10 22:59:25.016625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.347 [2024-12-10 22:59:25.016633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.347 [2024-12-10 22:59:25.016639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.016654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.347 [2024-12-10 22:59:25.026563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.347 [2024-12-10 22:59:25.026660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.347 [2024-12-10 22:59:25.026673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.347 [2024-12-10 22:59:25.026680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.347 [2024-12-10 22:59:25.026687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.026701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.347 [2024-12-10 22:59:25.036647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.347 [2024-12-10 22:59:25.036697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.347 [2024-12-10 22:59:25.036715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.347 [2024-12-10 22:59:25.036722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.347 [2024-12-10 22:59:25.036729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.036745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.347 [2024-12-10 22:59:25.046608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.347 [2024-12-10 22:59:25.046665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.347 [2024-12-10 22:59:25.046679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.347 [2024-12-10 22:59:25.046686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.347 [2024-12-10 22:59:25.046692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.046707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.347 [2024-12-10 22:59:25.056700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.347 [2024-12-10 22:59:25.056761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.347 [2024-12-10 22:59:25.056774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.347 [2024-12-10 22:59:25.056782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.347 [2024-12-10 22:59:25.056788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.056802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.347 [2024-12-10 22:59:25.066658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.347 [2024-12-10 22:59:25.066715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.347 [2024-12-10 22:59:25.066730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.347 [2024-12-10 22:59:25.066737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.347 [2024-12-10 22:59:25.066743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:17.347 [2024-12-10 22:59:25.066759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:17.347 qpair failed and we were unable to recover it. 
00:28:17.605 [2024-12-10 22:59:25.076682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.605 [2024-12-10 22:59:25.076739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.605 [2024-12-10 22:59:25.076753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.605 [2024-12-10 22:59:25.076760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.605 [2024-12-10 22:59:25.076769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.605 [2024-12-10 22:59:25.076785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.605 qpair failed and we were unable to recover it.
00:28:17.605 [2024-12-10 22:59:25.086808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.605 [2024-12-10 22:59:25.086873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.605 [2024-12-10 22:59:25.086887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.605 [2024-12-10 22:59:25.086895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.605 [2024-12-10 22:59:25.086901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.605 [2024-12-10 22:59:25.086917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.605 qpair failed and we were unable to recover it.
00:28:17.605 [2024-12-10 22:59:25.096797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.605 [2024-12-10 22:59:25.096884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.605 [2024-12-10 22:59:25.096900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.605 [2024-12-10 22:59:25.096907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.605 [2024-12-10 22:59:25.096914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.605 [2024-12-10 22:59:25.096929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.605 qpair failed and we were unable to recover it.
00:28:17.605 [2024-12-10 22:59:25.106784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.605 [2024-12-10 22:59:25.106843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.605 [2024-12-10 22:59:25.106857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.605 [2024-12-10 22:59:25.106865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.605 [2024-12-10 22:59:25.106872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.106887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.116807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.116864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.116878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.116885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.116892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.116908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.126895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.126954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.126968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.126975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.126982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.126997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.136965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.137048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.137062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.137069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.137075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.137089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.146953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.147008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.147022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.147029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.147036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.147051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.156924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.156979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.156993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.157001] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.157008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.157024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.167040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.167105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.167119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.167127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.167133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90
00:28:17.606 [2024-12-10 22:59:25.167148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.177082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.177193] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.177257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.177283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.177304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.177362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.187081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.187155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.187190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.187205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.187218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.187248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.197116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.197180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.197199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.197209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.197218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.197239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.207192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.207253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.207268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.207279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.207286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.207301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.217313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.217383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.217397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.217405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.217412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.217426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.227257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.227317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.227332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.227339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.227346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.227361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.237268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.237326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.237340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.237348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.237354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.237369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.247331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.247391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.247408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.247416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.247422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.247437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.257317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.257384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.257398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.257405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.257412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.257427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.267311] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.267370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.267384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.267392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.267399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.267413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.277360] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.277414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.277428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.277436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.277443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.277458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.287388] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.287445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.287460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.287467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.606 [2024-12-10 22:59:25.287473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.606 [2024-12-10 22:59:25.287488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.606 qpair failed and we were unable to recover it.
00:28:17.606 [2024-12-10 22:59:25.297433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.606 [2024-12-10 22:59:25.297511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.606 [2024-12-10 22:59:25.297525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.606 [2024-12-10 22:59:25.297532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.607 [2024-12-10 22:59:25.297538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.607 [2024-12-10 22:59:25.297553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.607 qpair failed and we were unable to recover it.
00:28:17.607 [2024-12-10 22:59:25.307444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.607 [2024-12-10 22:59:25.307500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.607 [2024-12-10 22:59:25.307514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.607 [2024-12-10 22:59:25.307522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.607 [2024-12-10 22:59:25.307529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.607 [2024-12-10 22:59:25.307543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.607 qpair failed and we were unable to recover it.
00:28:17.607 [2024-12-10 22:59:25.317472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.607 [2024-12-10 22:59:25.317567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.607 [2024-12-10 22:59:25.317581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.607 [2024-12-10 22:59:25.317588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.607 [2024-12-10 22:59:25.317594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.607 [2024-12-10 22:59:25.317609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.607 qpair failed and we were unable to recover it.
00:28:17.607 [2024-12-10 22:59:25.327519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.607 [2024-12-10 22:59:25.327580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.607 [2024-12-10 22:59:25.327594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.607 [2024-12-10 22:59:25.327601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.607 [2024-12-10 22:59:25.327608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.607 [2024-12-10 22:59:25.327622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.607 qpair failed and we were unable to recover it.
00:28:17.866 [2024-12-10 22:59:25.337538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.866 [2024-12-10 22:59:25.337594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.866 [2024-12-10 22:59:25.337608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.866 [2024-12-10 22:59:25.337620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.866 [2024-12-10 22:59:25.337626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.866 [2024-12-10 22:59:25.337641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.866 qpair failed and we were unable to recover it.
00:28:17.866 [2024-12-10 22:59:25.347565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.866 [2024-12-10 22:59:25.347619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.866 [2024-12-10 22:59:25.347635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.866 [2024-12-10 22:59:25.347643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.866 [2024-12-10 22:59:25.347649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.866 [2024-12-10 22:59:25.347664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.866 qpair failed and we were unable to recover it.
00:28:17.866 [2024-12-10 22:59:25.357588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.866 [2024-12-10 22:59:25.357648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.866 [2024-12-10 22:59:25.357663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.866 [2024-12-10 22:59:25.357671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.866 [2024-12-10 22:59:25.357678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.866 [2024-12-10 22:59:25.357693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.866 qpair failed and we were unable to recover it.
00:28:17.866 [2024-12-10 22:59:25.367653] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.866 [2024-12-10 22:59:25.367710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.866 [2024-12-10 22:59:25.367724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.866 [2024-12-10 22:59:25.367732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.866 [2024-12-10 22:59:25.367739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.866 [2024-12-10 22:59:25.367753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.866 qpair failed and we were unable to recover it.
00:28:17.866 [2024-12-10 22:59:25.377662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:17.866 [2024-12-10 22:59:25.377736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:17.866 [2024-12-10 22:59:25.377751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:17.866 [2024-12-10 22:59:25.377760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:17.866 [2024-12-10 22:59:25.377767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:17.866 [2024-12-10 22:59:25.377782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:17.866 qpair failed and we were unable to recover it.
00:28:17.866 [2024-12-10 22:59:25.387656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-12-10 22:59:25.387918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-12-10 22:59:25.387936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-12-10 22:59:25.387943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-12-10 22:59:25.387950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.866 [2024-12-10 22:59:25.387966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 
00:28:17.866 [2024-12-10 22:59:25.397721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.866 [2024-12-10 22:59:25.397773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.866 [2024-12-10 22:59:25.397787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.866 [2024-12-10 22:59:25.397794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.866 [2024-12-10 22:59:25.397801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.866 [2024-12-10 22:59:25.397815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.866 qpair failed and we were unable to recover it. 
00:28:17.866 [2024-12-10 22:59:25.407753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.407812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.407827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.407834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.407840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.407855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.417785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.417843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.417857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.417864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.417871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.417886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.427830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.427903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.427918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.427925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.427931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.427947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.437837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.437912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.437928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.437935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.437941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.437956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.447896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.447958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.447973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.447980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.447987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.448002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.457905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.457963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.457977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.457984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.457991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.458005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.467935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.467992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.468007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.468019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.468025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.468041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.477997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.478096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.478111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.478118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.478124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.478139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.487957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.488020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.488034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.488041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.488047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.488063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.498007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.498065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.498079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.498086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.498093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.498107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.508038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.508131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.508145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.508153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.508163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.508178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.518066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.518121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.518136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.518143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.518150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.518169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.528104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.528168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.528182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.528189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.867 [2024-12-10 22:59:25.528196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.867 [2024-12-10 22:59:25.528211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.867 qpair failed and we were unable to recover it. 
00:28:17.867 [2024-12-10 22:59:25.538155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.867 [2024-12-10 22:59:25.538216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.867 [2024-12-10 22:59:25.538229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.867 [2024-12-10 22:59:25.538236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.868 [2024-12-10 22:59:25.538242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.868 [2024-12-10 22:59:25.538257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.868 qpair failed and we were unable to recover it. 
00:28:17.868 [2024-12-10 22:59:25.548150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.868 [2024-12-10 22:59:25.548211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.868 [2024-12-10 22:59:25.548225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.868 [2024-12-10 22:59:25.548233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.868 [2024-12-10 22:59:25.548239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.868 [2024-12-10 22:59:25.548254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.868 qpair failed and we were unable to recover it. 
00:28:17.868 [2024-12-10 22:59:25.558190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.868 [2024-12-10 22:59:25.558254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.868 [2024-12-10 22:59:25.558269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.868 [2024-12-10 22:59:25.558276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.868 [2024-12-10 22:59:25.558282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.868 [2024-12-10 22:59:25.558297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.868 qpair failed and we were unable to recover it. 
00:28:17.868 [2024-12-10 22:59:25.568288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.868 [2024-12-10 22:59:25.568390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.868 [2024-12-10 22:59:25.568405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.868 [2024-12-10 22:59:25.568412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.868 [2024-12-10 22:59:25.568419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.868 [2024-12-10 22:59:25.568433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.868 qpair failed and we were unable to recover it. 
00:28:17.868 [2024-12-10 22:59:25.578241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.868 [2024-12-10 22:59:25.578299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.868 [2024-12-10 22:59:25.578313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.868 [2024-12-10 22:59:25.578322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.868 [2024-12-10 22:59:25.578329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.868 [2024-12-10 22:59:25.578344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.868 qpair failed and we were unable to recover it. 
00:28:17.868 [2024-12-10 22:59:25.588272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:17.868 [2024-12-10 22:59:25.588330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:17.868 [2024-12-10 22:59:25.588344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:17.868 [2024-12-10 22:59:25.588352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:17.868 [2024-12-10 22:59:25.588358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:17.868 [2024-12-10 22:59:25.588372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:17.868 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.598309] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.598376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.598390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.598402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.598408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.598422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.608348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.608409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.608422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.608429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.608436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.608450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.618364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.618421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.618435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.618442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.618448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.618462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.628396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.628452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.628465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.628474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.628481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.628496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.638439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.638494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.638508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.638515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.638522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.638536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.648475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.648534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.648549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.648555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.648562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.648576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.658488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.658540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.658554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.658562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.658568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.658582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.668524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.668575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.668590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.668597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.668604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.668618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.128 [2024-12-10 22:59:25.678545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.128 [2024-12-10 22:59:25.678601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.128 [2024-12-10 22:59:25.678615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.128 [2024-12-10 22:59:25.678622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.128 [2024-12-10 22:59:25.678629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.128 [2024-12-10 22:59:25.678644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.128 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.029572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.029640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.029654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.029662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.029668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.029683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.039598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.039653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.039670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.039679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.039686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.039701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.049584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.049642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.049657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.049664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.049671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.049686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.059587] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.059646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.059660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.059667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.059673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.059688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.069719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.069826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.069840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.069847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.069854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.069867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.079711] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.079768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.079785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.079792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.079798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.079813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.089677] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.089732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.089746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.089752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.089759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.089774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.099768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.099824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.099838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.099845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.099851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.099865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.391 [2024-12-10 22:59:26.109738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.391 [2024-12-10 22:59:26.109790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.391 [2024-12-10 22:59:26.109804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.391 [2024-12-10 22:59:26.109811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.391 [2024-12-10 22:59:26.109818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.391 [2024-12-10 22:59:26.109832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.391 qpair failed and we were unable to recover it. 
00:28:18.651 [2024-12-10 22:59:26.119841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.651 [2024-12-10 22:59:26.119896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.651 [2024-12-10 22:59:26.119909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.651 [2024-12-10 22:59:26.119920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.651 [2024-12-10 22:59:26.119926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.651 [2024-12-10 22:59:26.119941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.651 qpair failed and we were unable to recover it. 
00:28:18.651 [2024-12-10 22:59:26.129839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.651 [2024-12-10 22:59:26.129896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.651 [2024-12-10 22:59:26.129911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.651 [2024-12-10 22:59:26.129918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.651 [2024-12-10 22:59:26.129925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.651 [2024-12-10 22:59:26.129940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.651 qpair failed and we were unable to recover it. 
00:28:18.651 [2024-12-10 22:59:26.139915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.651 [2024-12-10 22:59:26.139973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.651 [2024-12-10 22:59:26.139988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.651 [2024-12-10 22:59:26.139995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.651 [2024-12-10 22:59:26.140002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.651 [2024-12-10 22:59:26.140017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.651 qpair failed and we were unable to recover it. 
00:28:18.651 [2024-12-10 22:59:26.149907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.651 [2024-12-10 22:59:26.149964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.651 [2024-12-10 22:59:26.149979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.651 [2024-12-10 22:59:26.149986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.651 [2024-12-10 22:59:26.149992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.651 [2024-12-10 22:59:26.150007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.651 qpair failed and we were unable to recover it. 
00:28:18.651 [2024-12-10 22:59:26.159938] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.651 [2024-12-10 22:59:26.159995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.651 [2024-12-10 22:59:26.160009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.651 [2024-12-10 22:59:26.160016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.651 [2024-12-10 22:59:26.160023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.160038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.169985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.170047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.170062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.170071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.170077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.170092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.180006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.180062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.180076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.180083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.180090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.180104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.190004] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.190065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.190080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.190087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.190093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.190108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.200057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.200110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.200125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.200133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.200139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.200153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.210105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.210168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.210187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.210194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.210201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.210215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.220129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.220217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.220232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.220239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.220246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.220261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.230173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.230244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.230257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.230265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.230271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.230286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.240175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.240231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.240246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.240253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.240259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.240274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.250227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.250288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.250302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.250313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.250320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.250334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.260242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.260302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.260317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.260326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.260334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.260348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.270257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.270352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.270366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.270373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.270379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.270394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.280204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.280258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.280272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.280279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.280286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.280300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.290344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.290400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.652 [2024-12-10 22:59:26.290414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.652 [2024-12-10 22:59:26.290421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.652 [2024-12-10 22:59:26.290428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.652 [2024-12-10 22:59:26.290442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.652 qpair failed and we were unable to recover it. 
00:28:18.652 [2024-12-10 22:59:26.300334] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.652 [2024-12-10 22:59:26.300394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.300409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.300415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.300422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.300437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.310336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.310394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.310408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.310415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.310422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.310436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.320375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.320429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.320443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.320450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.320456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.320471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.330435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.330509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.330523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.330530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.330536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.330550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.340453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.340509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.340527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.340534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.340541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.340556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.350417] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.350471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.350485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.350492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.350499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.350513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.360432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.360515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.360529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.360536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.360542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.360557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.653 [2024-12-10 22:59:26.370490] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.653 [2024-12-10 22:59:26.370572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.653 [2024-12-10 22:59:26.370586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.653 [2024-12-10 22:59:26.370593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.653 [2024-12-10 22:59:26.370599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.653 [2024-12-10 22:59:26.370614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.653 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.380589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.380647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.380661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.380672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.380680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.380695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.390594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.390648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.390661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.390669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.390676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.390691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.400544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.400601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.400614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.400622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.400629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.400643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.410664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.410724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.410738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.410745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.410751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.410765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.420668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.420727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.420741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.420748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.420755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.420769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.430645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.430703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.430717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.430724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.430731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.430745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.440660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.440742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.440757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.440765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.440771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.440786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.450782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.450837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.450851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.450858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.450865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.450880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.460786] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.460841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.460856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.460863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.460870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.460885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.913 [2024-12-10 22:59:26.470755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.913 [2024-12-10 22:59:26.470808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.913 [2024-12-10 22:59:26.470826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.913 [2024-12-10 22:59:26.470834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.913 [2024-12-10 22:59:26.470841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.913 [2024-12-10 22:59:26.470856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.913 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.480788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.480842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.480856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.480863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.480870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.480884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.490811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.490888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.490902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.490908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.490915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.490929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.500885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.500938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.500952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.500959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.500965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.500979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.510871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.510929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.510944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.510954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.510961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.510976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.520907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.520960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.520975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.520982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.520988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.521003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.530978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.531037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.531052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.531059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.531065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.531080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.540963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.541015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.541031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.541038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.541044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.541060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.550978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.551036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.551050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.551057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.551064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.551078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.561055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.561112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.561126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.561133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.561140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.561154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.571049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.571111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.571126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.571133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.571140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.571155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.581140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.581202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.581216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.581224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.581230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.581244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.591173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.591229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.591243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.591251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.591257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.591272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.601181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.914 [2024-12-10 22:59:26.601238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.914 [2024-12-10 22:59:26.601256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.914 [2024-12-10 22:59:26.601264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.914 [2024-12-10 22:59:26.601270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.914 [2024-12-10 22:59:26.601285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.914 qpair failed and we were unable to recover it. 
00:28:18.914 [2024-12-10 22:59:26.611207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.915 [2024-12-10 22:59:26.611265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.915 [2024-12-10 22:59:26.611280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.915 [2024-12-10 22:59:26.611287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.915 [2024-12-10 22:59:26.611294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.915 [2024-12-10 22:59:26.611308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.915 qpair failed and we were unable to recover it. 
00:28:18.915 [2024-12-10 22:59:26.621245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.915 [2024-12-10 22:59:26.621304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.915 [2024-12-10 22:59:26.621318] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.915 [2024-12-10 22:59:26.621326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.915 [2024-12-10 22:59:26.621332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.915 [2024-12-10 22:59:26.621346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.915 qpair failed and we were unable to recover it. 
00:28:18.915 [2024-12-10 22:59:26.631289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:18.915 [2024-12-10 22:59:26.631348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:18.915 [2024-12-10 22:59:26.631361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:18.915 [2024-12-10 22:59:26.631369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:18.915 [2024-12-10 22:59:26.631376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:18.915 [2024-12-10 22:59:26.631390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:18.915 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.641322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.641417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.641433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.641443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.641450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.641465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.651398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.651502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.651517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.651524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.651531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.651545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.661369] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.661445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.661460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.661467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.661473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.661488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.671397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.671454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.671468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.671475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.671482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.671496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.681448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.681500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.681513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.681520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.681526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.681541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.691496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.691566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.691581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.691588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.691595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.691610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.701500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.701553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.701566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.701573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.701580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.701595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.711523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.711602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.711616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.711624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.711630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.711645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.721553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.721604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.721620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.721627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.721634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.721649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.731648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.731749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.731766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.731774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.731780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.731794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.741617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.741676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.741691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.741699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.175 [2024-12-10 22:59:26.741705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.175 [2024-12-10 22:59:26.741721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.175 qpair failed and we were unable to recover it. 
00:28:19.175 [2024-12-10 22:59:26.751655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.175 [2024-12-10 22:59:26.751722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.175 [2024-12-10 22:59:26.751736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.175 [2024-12-10 22:59:26.751743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.751750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.751764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.761657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.761707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.761722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.761729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.761736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.761751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.771695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.771762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.771776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.771787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.771793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.771808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.781721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.781790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.781804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.781811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.781818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.781832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.791740] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.791796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.791810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.791817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.791824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.791838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.801772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.801830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.801844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.801851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.801858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.801872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.811815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.811874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.811887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.811895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.811901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.811916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.821892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.822001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.822018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.822025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.822031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.822047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.831866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.831922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.831937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.831944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.831950] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.831965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.841888] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.841940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.841956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.841963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.841970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.841985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.851929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.851990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.852004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.852012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.852018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.852033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.861947] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.861999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.862019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.862026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.862032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.862047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.872023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.872082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.872096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.872103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.872110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.872125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.882007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.176 [2024-12-10 22:59:26.882063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.176 [2024-12-10 22:59:26.882077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.176 [2024-12-10 22:59:26.882084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.176 [2024-12-10 22:59:26.882091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.176 [2024-12-10 22:59:26.882105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.176 qpair failed and we were unable to recover it. 
00:28:19.176 [2024-12-10 22:59:26.892056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.177 [2024-12-10 22:59:26.892126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.177 [2024-12-10 22:59:26.892140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.177 [2024-12-10 22:59:26.892147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.177 [2024-12-10 22:59:26.892154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.177 [2024-12-10 22:59:26.892174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.177 qpair failed and we were unable to recover it. 
00:28:19.436 [2024-12-10 22:59:26.902071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.437 [2024-12-10 22:59:26.902172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.437 [2024-12-10 22:59:26.902189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.437 [2024-12-10 22:59:26.902200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.437 [2024-12-10 22:59:26.902206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.437 [2024-12-10 22:59:26.902223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.437 qpair failed and we were unable to recover it. 
00:28:19.437 [2024-12-10 22:59:26.912105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.437 [2024-12-10 22:59:26.912164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.437 [2024-12-10 22:59:26.912178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.437 [2024-12-10 22:59:26.912186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.437 [2024-12-10 22:59:26.912193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.437 [2024-12-10 22:59:26.912209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.437 qpair failed and we were unable to recover it. 
00:28:19.437 [2024-12-10 22:59:26.922117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.922177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.922192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.922199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.922206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.922220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.932165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.932222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.932236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.932243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.932250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.932265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.942198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.942270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.942285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.942293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.942299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.942314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.952184] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.952240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.952254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.952261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.952268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.952282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.962244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.962299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.962313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.962321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.962327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.962342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.972283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.972352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.972367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.972374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.972380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.972395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.982305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.982364] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.982378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.982386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.982392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.982407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:26.992313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:26.992376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:26.992394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:26.992401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:26.992408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:26.992422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:27.002339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:27.002396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:27.002410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:27.002418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:27.002424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:27.002439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:27.012412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:27.012484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:27.012498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:27.012506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:27.012512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:27.012528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:27.022428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:27.022486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.437 [2024-12-10 22:59:27.022499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.437 [2024-12-10 22:59:27.022506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.437 [2024-12-10 22:59:27.022513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.437 [2024-12-10 22:59:27.022527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.437 qpair failed and we were unable to recover it.
00:28:19.437 [2024-12-10 22:59:27.032456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.437 [2024-12-10 22:59:27.032510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.032524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.032535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.032542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.032557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.042476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.042536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.042551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.042558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.042565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.042581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.052520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.052580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.052595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.052602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.052608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.052623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.062595] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.062669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.062684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.062691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.062697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.062712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.072559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.072620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.072634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.072642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.072648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.072662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.082591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.082647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.082661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.082669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.082675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.082689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.092642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.092717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.092732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.092739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.092745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.092760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.102654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.102706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.102721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.102728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.102734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.102748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.112655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.112722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.112736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.112743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.112749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.112764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.122731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.122791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.122810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.122817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.122823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.122837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.132738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.132796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.132809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.132816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.132822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.132837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.142795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.142871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.142886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.142894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.142900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.142914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.438 [2024-12-10 22:59:27.152781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.438 [2024-12-10 22:59:27.152855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.438 [2024-12-10 22:59:27.152869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.438 [2024-12-10 22:59:27.152876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.438 [2024-12-10 22:59:27.152882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.438 [2024-12-10 22:59:27.152897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.438 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.162821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.162884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.162899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.162910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.162916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.162931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.172852] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.172909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.172923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.172931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.172937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.172952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.182881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.182938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.182951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.182958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.182965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.182980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.192913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.192966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.192979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.192987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.192993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.193008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.202939] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.202990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.203004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.203011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.203018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.203032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.212957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.213013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.213026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.213033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.213039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.213053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.223077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.223146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.223163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.223171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.223178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.223192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.233068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.233125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.233139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.233147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.233153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.233172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.243085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.243145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.243163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.243171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.243177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.243193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.253128] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.253230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.253249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.253256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.253262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.699 [2024-12-10 22:59:27.253277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.699 qpair failed and we were unable to recover it.
00:28:19.699 [2024-12-10 22:59:27.263168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.699 [2024-12-10 22:59:27.263269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.699 [2024-12-10 22:59:27.263284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.699 [2024-12-10 22:59:27.263291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.699 [2024-12-10 22:59:27.263299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.700 [2024-12-10 22:59:27.263314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.700 qpair failed and we were unable to recover it.
00:28:19.700 [2024-12-10 22:59:27.273137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.273200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.273214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.273222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.273228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.273243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.283134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.283198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.283212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.283219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.283226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.283240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.293202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.293261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.293274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.293283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.293292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.293306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.303164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.303246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.303260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.303267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.303273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.303288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.313252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.313306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.313320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.313327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.313333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.313348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.323249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.323306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.323320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.323329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.323335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.323349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.333314] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.333374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.333388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.333396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.333402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.333417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.343339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.343393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.343408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.343415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.343422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.343438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.353366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.353423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.353438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.353446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.353453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.353467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.363457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.363514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.363529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.363536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.363543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.363557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.373439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.373522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.373537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.373544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.373551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.373565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.383487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.383547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.383565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.383573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.383579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.383594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.393488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.393540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.700 [2024-12-10 22:59:27.393554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.700 [2024-12-10 22:59:27.393561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.700 [2024-12-10 22:59:27.393567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.700 [2024-12-10 22:59:27.393581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.700 qpair failed and we were unable to recover it. 
00:28:19.700 [2024-12-10 22:59:27.403512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.700 [2024-12-10 22:59:27.403573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.701 [2024-12-10 22:59:27.403587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.701 [2024-12-10 22:59:27.403595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.701 [2024-12-10 22:59:27.403602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.701 [2024-12-10 22:59:27.403616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.701 qpair failed and we were unable to recover it. 
00:28:19.701 [2024-12-10 22:59:27.413557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.701 [2024-12-10 22:59:27.413613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.701 [2024-12-10 22:59:27.413627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.701 [2024-12-10 22:59:27.413635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.701 [2024-12-10 22:59:27.413641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.701 [2024-12-10 22:59:27.413656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.701 qpair failed and we were unable to recover it. 
00:28:19.701 [2024-12-10 22:59:27.423593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.701 [2024-12-10 22:59:27.423647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.701 [2024-12-10 22:59:27.423662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.701 [2024-12-10 22:59:27.423669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.701 [2024-12-10 22:59:27.423679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.701 [2024-12-10 22:59:27.423694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.701 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.433625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.433681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.433695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.433702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.433709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.433723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.443575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.443633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.443648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.443656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.443662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.443678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.453690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.453764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.453779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.453786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.453792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.453806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.463696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.463753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.463767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.463776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.463783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.463797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.473759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.473817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.473832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.473839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.473845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.473860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.483788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.483846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.483859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.483867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.483873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.483887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.493798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.493862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.493876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.493883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.493889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.493904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.503846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.503915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.503929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.503936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.503942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.503957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.513887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.513991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.514010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.514018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.514025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.514041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.523878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.523932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.523947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.523954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.523961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.523977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.533903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:19.961 [2024-12-10 22:59:27.533959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:19.961 [2024-12-10 22:59:27.533973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:19.961 [2024-12-10 22:59:27.533981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:19.961 [2024-12-10 22:59:27.533987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:19.961 [2024-12-10 22:59:27.534002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:19.961 qpair failed and we were unable to recover it. 
00:28:19.961 [2024-12-10 22:59:27.543943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.961 [2024-12-10 22:59:27.544008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.961 [2024-12-10 22:59:27.544023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.961 [2024-12-10 22:59:27.544030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.961 [2024-12-10 22:59:27.544037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.961 [2024-12-10 22:59:27.544052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.553971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.554027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.554041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.554050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.554061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.554076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.564035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.564101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.564117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.564124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.564130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.564145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.574025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.574081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.574096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.574103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.574109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.574124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.584062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.584119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.584134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.584141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.584147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.584166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.594081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.594140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.594154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.594164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.594171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.594186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.604043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.604099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.604114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.604122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.604128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.604142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.614136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.614202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.614217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.614225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.614231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.614247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.624106] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.624171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.624185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.624193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.624199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.624214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.634210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.634292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.634307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.634314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.634321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.634336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.644240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.644296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.644315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.644323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.644329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.644344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.654268] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.654341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.654356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.654363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.654369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.654384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.664287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.664342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.664356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.664364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.664370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.664385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.674336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.674394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.674410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.962 [2024-12-10 22:59:27.674417] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.962 [2024-12-10 22:59:27.674423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.962 [2024-12-10 22:59:27.674438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.962 qpair failed and we were unable to recover it.
00:28:19.962 [2024-12-10 22:59:27.684318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:19.962 [2024-12-10 22:59:27.684376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:19.962 [2024-12-10 22:59:27.684390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:19.963 [2024-12-10 22:59:27.684398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:19.963 [2024-12-10 22:59:27.684407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:19.963 [2024-12-10 22:59:27.684422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:19.963 qpair failed and we were unable to recover it.
00:28:20.222 [2024-12-10 22:59:27.694377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.222 [2024-12-10 22:59:27.694479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.222 [2024-12-10 22:59:27.694492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.222 [2024-12-10 22:59:27.694499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.222 [2024-12-10 22:59:27.694506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.222 [2024-12-10 22:59:27.694520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.222 qpair failed and we were unable to recover it.
00:28:20.222 [2024-12-10 22:59:27.704405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.222 [2024-12-10 22:59:27.704480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.222 [2024-12-10 22:59:27.704494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.704502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.704508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.704522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.714456] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.714540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.714556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.714564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.714571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.714585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.724463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.724517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.724531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.724538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.724544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.724559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.734420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.734475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.734489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.734496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.734502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.734516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.744515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.744574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.744589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.744596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.744603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.744618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.754470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.754530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.754545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.754551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.754558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.754573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.764502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.764562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.764577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.764584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.764591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.764606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.774591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.774649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.774666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.774673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.774680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.774694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.784658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.784712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.784726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.784733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.784740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.784754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.794654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.794712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.794726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.794733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.794741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.794756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.804612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.804667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.804681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.804688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.804695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.804709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.814645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.814703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.814717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.814725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.814734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.814750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.824680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.223 [2024-12-10 22:59:27.824737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.223 [2024-12-10 22:59:27.824751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.223 [2024-12-10 22:59:27.824758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.223 [2024-12-10 22:59:27.824764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.223 [2024-12-10 22:59:27.824778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.223 qpair failed and we were unable to recover it.
00:28:20.223 [2024-12-10 22:59:27.834682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.224 [2024-12-10 22:59:27.834778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.224 [2024-12-10 22:59:27.834793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.224 [2024-12-10 22:59:27.834800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.224 [2024-12-10 22:59:27.834808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.224 [2024-12-10 22:59:27.834823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.224 qpair failed and we were unable to recover it.
00:28:20.224 [2024-12-10 22:59:27.844790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.224 [2024-12-10 22:59:27.844862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.224 [2024-12-10 22:59:27.844878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.224 [2024-12-10 22:59:27.844884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.224 [2024-12-10 22:59:27.844891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.224 [2024-12-10 22:59:27.844906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.224 qpair failed and we were unable to recover it.
00:28:20.224 [2024-12-10 22:59:27.854753] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.224 [2024-12-10 22:59:27.854810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.224 [2024-12-10 22:59:27.854824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.224 [2024-12-10 22:59:27.854832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.224 [2024-12-10 22:59:27.854838] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.224 [2024-12-10 22:59:27.854853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.224 qpair failed and we were unable to recover it.
00:28:20.224 [2024-12-10 22:59:27.864829] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.224 [2024-12-10 22:59:27.864885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.224 [2024-12-10 22:59:27.864900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.224 [2024-12-10 22:59:27.864907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.224 [2024-12-10 22:59:27.864914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.224 [2024-12-10 22:59:27.864929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.224 qpair failed and we were unable to recover it.
00:28:20.224 [2024-12-10 22:59:27.874810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.224 [2024-12-10 22:59:27.874862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.224 [2024-12-10 22:59:27.874876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.224 [2024-12-10 22:59:27.874883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.224 [2024-12-10 22:59:27.874890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.224 [2024-12-10 22:59:27.874904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.224 qpair failed and we were unable to recover it.
00:28:20.224 [2024-12-10 22:59:27.884892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:20.224 [2024-12-10 22:59:27.884947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:20.224 [2024-12-10 22:59:27.884961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:20.224 [2024-12-10 22:59:27.884968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:20.224 [2024-12-10 22:59:27.884975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:20.224 [2024-12-10 22:59:27.884989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:20.224 qpair failed and we were unable to recover it.
00:28:20.224 [2024-12-10 22:59:27.894950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.224 [2024-12-10 22:59:27.895008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.224 [2024-12-10 22:59:27.895022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.224 [2024-12-10 22:59:27.895029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.224 [2024-12-10 22:59:27.895036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.224 [2024-12-10 22:59:27.895050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.224 qpair failed and we were unable to recover it. 
00:28:20.224 [2024-12-10 22:59:27.904896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.224 [2024-12-10 22:59:27.904982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.224 [2024-12-10 22:59:27.905003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.224 [2024-12-10 22:59:27.905010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.224 [2024-12-10 22:59:27.905017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.224 [2024-12-10 22:59:27.905032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.224 qpair failed and we were unable to recover it. 
00:28:20.224 [2024-12-10 22:59:27.914986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.224 [2024-12-10 22:59:27.915040] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.224 [2024-12-10 22:59:27.915055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.224 [2024-12-10 22:59:27.915062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.224 [2024-12-10 22:59:27.915069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.224 [2024-12-10 22:59:27.915085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.224 qpair failed and we were unable to recover it. 
00:28:20.224 [2024-12-10 22:59:27.925027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.224 [2024-12-10 22:59:27.925098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.224 [2024-12-10 22:59:27.925112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.224 [2024-12-10 22:59:27.925120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.224 [2024-12-10 22:59:27.925126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.224 [2024-12-10 22:59:27.925140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.224 qpair failed and we were unable to recover it. 
00:28:20.224 [2024-12-10 22:59:27.935041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.224 [2024-12-10 22:59:27.935100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.224 [2024-12-10 22:59:27.935114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.224 [2024-12-10 22:59:27.935122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.224 [2024-12-10 22:59:27.935129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.224 [2024-12-10 22:59:27.935143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.224 qpair failed and we were unable to recover it. 
00:28:20.224 [2024-12-10 22:59:27.945073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.224 [2024-12-10 22:59:27.945124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.224 [2024-12-10 22:59:27.945139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.224 [2024-12-10 22:59:27.945146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.224 [2024-12-10 22:59:27.945156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.224 [2024-12-10 22:59:27.945177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.224 qpair failed and we were unable to recover it. 
00:28:20.484 [2024-12-10 22:59:27.955142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.484 [2024-12-10 22:59:27.955211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.484 [2024-12-10 22:59:27.955226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.484 [2024-12-10 22:59:27.955233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.484 [2024-12-10 22:59:27.955239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.484 [2024-12-10 22:59:27.955254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.484 qpair failed and we were unable to recover it. 
00:28:20.484 [2024-12-10 22:59:27.965137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.484 [2024-12-10 22:59:27.965210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.484 [2024-12-10 22:59:27.965225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.484 [2024-12-10 22:59:27.965233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.484 [2024-12-10 22:59:27.965239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.484 [2024-12-10 22:59:27.965253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.484 qpair failed and we were unable to recover it. 
00:28:20.484 [2024-12-10 22:59:27.975155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.484 [2024-12-10 22:59:27.975228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.484 [2024-12-10 22:59:27.975243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.484 [2024-12-10 22:59:27.975251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:27.975258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:27.975273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:27.985189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:27.985247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:27.985261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:27.985269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:27.985275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:27.985290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:27.995226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:27.995324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:27.995339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:27.995346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:27.995352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:27.995366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.005233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.005290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.005304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.005313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.005319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.005334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.015196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.015257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.015271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.015278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.015285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.015300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.025241] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.025300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.025314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.025322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.025328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.025343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.035328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.035389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.035407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.035414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.035420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.035435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.045339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.045392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.045407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.045414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.045421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.045436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.055379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.055436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.055450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.055458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.055464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.055479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.065443] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.065547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.065563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.065569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.065576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.065592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.075441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.075497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.075511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.075519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.075529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.075544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.085376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.085430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.085445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.085452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.085458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.085473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.095486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.095545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.095559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.095566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.095573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.485 [2024-12-10 22:59:28.095588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.485 qpair failed and we were unable to recover it. 
00:28:20.485 [2024-12-10 22:59:28.105504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.485 [2024-12-10 22:59:28.105563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.485 [2024-12-10 22:59:28.105579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.485 [2024-12-10 22:59:28.105586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.485 [2024-12-10 22:59:28.105593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.105608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 [2024-12-10 22:59:28.115560] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.486 [2024-12-10 22:59:28.115619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.486 [2024-12-10 22:59:28.115633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.486 [2024-12-10 22:59:28.115640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.486 [2024-12-10 22:59:28.115647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.115661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 [2024-12-10 22:59:28.125559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.486 [2024-12-10 22:59:28.125613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.486 [2024-12-10 22:59:28.125627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.486 [2024-12-10 22:59:28.125634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.486 [2024-12-10 22:59:28.125641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.125656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 [2024-12-10 22:59:28.135620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.486 [2024-12-10 22:59:28.135682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.486 [2024-12-10 22:59:28.135695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.486 [2024-12-10 22:59:28.135703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.486 [2024-12-10 22:59:28.135709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.135723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 [2024-12-10 22:59:28.145626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.486 [2024-12-10 22:59:28.145683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.486 [2024-12-10 22:59:28.145698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.486 [2024-12-10 22:59:28.145706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.486 [2024-12-10 22:59:28.145713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.145728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 [2024-12-10 22:59:28.155669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.486 [2024-12-10 22:59:28.155722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.486 [2024-12-10 22:59:28.155736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.486 [2024-12-10 22:59:28.155743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.486 [2024-12-10 22:59:28.155749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.155764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 [2024-12-10 22:59:28.165690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:20.486 [2024-12-10 22:59:28.165742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:20.486 [2024-12-10 22:59:28.165761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:20.486 [2024-12-10 22:59:28.165770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:20.486 [2024-12-10 22:59:28.165776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:20.486 [2024-12-10 22:59:28.165791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:20.486 qpair failed and we were unable to recover it. 
00:28:20.486 - 00:28:21.008 [2024-12-10 22:59:28.175703 - 22:59:28.506772] (the identical CONNECT failure sequence repeated 34 more times at ~10 ms intervals: Unknown controller ID 0x1; Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; sct 1, sc 130; Failed to connect tqpair=0xa92be0; CQ transport error -6 (No such device or address) on qpair id 3; each attempt ended with "qpair failed and we were unable to recover it.")
00:28:21.008 [2024-12-10 22:59:28.516728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.008 [2024-12-10 22:59:28.516790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.008 [2024-12-10 22:59:28.516805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.008 [2024-12-10 22:59:28.516812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.008 [2024-12-10 22:59:28.516819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.008 [2024-12-10 22:59:28.516834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.008 qpair failed and we were unable to recover it. 
00:28:21.008 [2024-12-10 22:59:28.526738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.008 [2024-12-10 22:59:28.526801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.008 [2024-12-10 22:59:28.526814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.008 [2024-12-10 22:59:28.526821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.008 [2024-12-10 22:59:28.526828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.008 [2024-12-10 22:59:28.526842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.008 qpair failed and we were unable to recover it. 
00:28:21.008 [2024-12-10 22:59:28.536774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.008 [2024-12-10 22:59:28.536842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.008 [2024-12-10 22:59:28.536855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.008 [2024-12-10 22:59:28.536863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.008 [2024-12-10 22:59:28.536869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.008 [2024-12-10 22:59:28.536883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.008 qpair failed and we were unable to recover it. 
00:28:21.008 [2024-12-10 22:59:28.546826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.008 [2024-12-10 22:59:28.546879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.008 [2024-12-10 22:59:28.546894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.008 [2024-12-10 22:59:28.546901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.546907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.546923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.556827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.556876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.556893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.556900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.556907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.556921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.566854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.566915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.566930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.566937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.566943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.566958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.576893] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.576951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.576965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.576972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.576979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.576993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.586955] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.587022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.587037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.587044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.587051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.587065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.596945] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.597000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.597014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.597021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.597031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.597047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.606951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.607006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.607021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.607028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.607035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.607049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.616921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.616978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.616993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.617000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.617007] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.617022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.627025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.627084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.627098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.627106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.627112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.627128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.637091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.637198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.637212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.637220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.637226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.637242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.647071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.647127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.647142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.647149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.647155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.647175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.657120] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.657181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.657195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.657203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.657209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.657224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.667103] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.667194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.667210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.667217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.667223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.667238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.677095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.009 [2024-12-10 22:59:28.677149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.009 [2024-12-10 22:59:28.677167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.009 [2024-12-10 22:59:28.677174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.009 [2024-12-10 22:59:28.677181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.009 [2024-12-10 22:59:28.677195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.009 qpair failed and we were unable to recover it. 
00:28:21.009 [2024-12-10 22:59:28.687202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.010 [2024-12-10 22:59:28.687258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.010 [2024-12-10 22:59:28.687276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.010 [2024-12-10 22:59:28.687283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.010 [2024-12-10 22:59:28.687290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.010 [2024-12-10 22:59:28.687305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.010 qpair failed and we were unable to recover it. 
00:28:21.010 [2024-12-10 22:59:28.697243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.010 [2024-12-10 22:59:28.697336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.010 [2024-12-10 22:59:28.697352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.010 [2024-12-10 22:59:28.697359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.010 [2024-12-10 22:59:28.697366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.010 [2024-12-10 22:59:28.697381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.010 qpair failed and we were unable to recover it. 
00:28:21.010 [2024-12-10 22:59:28.707266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.010 [2024-12-10 22:59:28.707331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.010 [2024-12-10 22:59:28.707345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.010 [2024-12-10 22:59:28.707352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.010 [2024-12-10 22:59:28.707359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.010 [2024-12-10 22:59:28.707374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.010 qpair failed and we were unable to recover it. 
00:28:21.010 [2024-12-10 22:59:28.717340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.010 [2024-12-10 22:59:28.717397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.010 [2024-12-10 22:59:28.717411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.010 [2024-12-10 22:59:28.717418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.010 [2024-12-10 22:59:28.717424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.010 [2024-12-10 22:59:28.717438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.010 qpair failed and we were unable to recover it. 
00:28:21.010 [2024-12-10 22:59:28.727366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.010 [2024-12-10 22:59:28.727418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.010 [2024-12-10 22:59:28.727431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.010 [2024-12-10 22:59:28.727439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.010 [2024-12-10 22:59:28.727448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.010 [2024-12-10 22:59:28.727463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.010 qpair failed and we were unable to recover it. 
00:28:21.270 [2024-12-10 22:59:28.737342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.270 [2024-12-10 22:59:28.737430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.270 [2024-12-10 22:59:28.737444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.270 [2024-12-10 22:59:28.737451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.270 [2024-12-10 22:59:28.737457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.270 [2024-12-10 22:59:28.737472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.270 qpair failed and we were unable to recover it. 
00:28:21.270 [2024-12-10 22:59:28.747383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.270 [2024-12-10 22:59:28.747435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.270 [2024-12-10 22:59:28.747451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.270 [2024-12-10 22:59:28.747458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.270 [2024-12-10 22:59:28.747465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.270 [2024-12-10 22:59:28.747480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.270 qpair failed and we were unable to recover it. 
00:28:21.270 [2024-12-10 22:59:28.757401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.270 [2024-12-10 22:59:28.757455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.270 [2024-12-10 22:59:28.757470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.270 [2024-12-10 22:59:28.757477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.270 [2024-12-10 22:59:28.757483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.270 [2024-12-10 22:59:28.757497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.270 qpair failed and we were unable to recover it. 
00:28:21.270 [2024-12-10 22:59:28.767432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.270 [2024-12-10 22:59:28.767490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.270 [2024-12-10 22:59:28.767504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.270 [2024-12-10 22:59:28.767513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.270 [2024-12-10 22:59:28.767520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.270 [2024-12-10 22:59:28.767534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.270 qpair failed and we were unable to recover it. 
00:28:21.270 [2024-12-10 22:59:28.777381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.270 [2024-12-10 22:59:28.777438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.270 [2024-12-10 22:59:28.777452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.270 [2024-12-10 22:59:28.777459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.270 [2024-12-10 22:59:28.777466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.270 [2024-12-10 22:59:28.777480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.270 qpair failed and we were unable to recover it. 
00:28:21.270 [2024-12-10 22:59:28.787489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.270 [2024-12-10 22:59:28.787565] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.270 [2024-12-10 22:59:28.787579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.270 [2024-12-10 22:59:28.787586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.270 [2024-12-10 22:59:28.787592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.270 [2024-12-10 22:59:28.787607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.270 qpair failed and we were unable to recover it.
00:28:21.270 [2024-12-10 22:59:28.797510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.270 [2024-12-10 22:59:28.797572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.270 [2024-12-10 22:59:28.797587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.270 [2024-12-10 22:59:28.797594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.270 [2024-12-10 22:59:28.797601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.270 [2024-12-10 22:59:28.797615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.270 qpair failed and we were unable to recover it.
00:28:21.270 [2024-12-10 22:59:28.807545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.270 [2024-12-10 22:59:28.807599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.270 [2024-12-10 22:59:28.807612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.270 [2024-12-10 22:59:28.807619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.270 [2024-12-10 22:59:28.807625] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.270 [2024-12-10 22:59:28.807639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.270 qpair failed and we were unable to recover it.
00:28:21.270 [2024-12-10 22:59:28.817545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.270 [2024-12-10 22:59:28.817630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.270 [2024-12-10 22:59:28.817650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.270 [2024-12-10 22:59:28.817657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.270 [2024-12-10 22:59:28.817663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.270 [2024-12-10 22:59:28.817678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.270 qpair failed and we were unable to recover it.
00:28:21.270 [2024-12-10 22:59:28.827583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.270 [2024-12-10 22:59:28.827671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.270 [2024-12-10 22:59:28.827684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.270 [2024-12-10 22:59:28.827691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.270 [2024-12-10 22:59:28.827697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.270 [2024-12-10 22:59:28.827711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.270 qpair failed and we were unable to recover it.
00:28:21.270 [2024-12-10 22:59:28.837635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.270 [2024-12-10 22:59:28.837688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.270 [2024-12-10 22:59:28.837702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.270 [2024-12-10 22:59:28.837710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.270 [2024-12-10 22:59:28.837716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.270 [2024-12-10 22:59:28.837730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.270 qpair failed and we were unable to recover it.
00:28:21.270 [2024-12-10 22:59:28.847568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.847631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.847646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.847653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.847659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.847674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.857631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.857704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.857718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.857725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.857734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.857749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.867750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.867852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.867866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.867873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.867880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.867894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.877756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.877855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.877869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.877876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.877882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.877896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.887727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.887793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.887808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.887815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.887821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.887835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.897792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.897850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.897864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.897871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.897878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.897892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.907818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.907880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.907897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.907905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.907911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.907927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.917839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.917891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.917906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.917913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.917920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.917934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.927806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.927856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.927870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.927877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.927884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.927899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.937905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.937973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.937986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.937994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.938000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.938014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.947928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.947983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.948001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.948008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.948015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.948030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.957998] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.958061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.958076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.958083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.958089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.958104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.968018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.968085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.968100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.271 [2024-12-10 22:59:28.968107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.271 [2024-12-10 22:59:28.968113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.271 [2024-12-10 22:59:28.968128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.271 qpair failed and we were unable to recover it.
00:28:21.271 [2024-12-10 22:59:28.978035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.271 [2024-12-10 22:59:28.978106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.271 [2024-12-10 22:59:28.978120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.272 [2024-12-10 22:59:28.978127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.272 [2024-12-10 22:59:28.978133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.272 [2024-12-10 22:59:28.978148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.272 qpair failed and we were unable to recover it.
00:28:21.272 [2024-12-10 22:59:28.988039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.272 [2024-12-10 22:59:28.988096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.272 [2024-12-10 22:59:28.988110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.272 [2024-12-10 22:59:28.988117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.272 [2024-12-10 22:59:28.988127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.272 [2024-12-10 22:59:28.988142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.272 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:28.998062] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:28.998118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:28.998132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:28.998140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:28.998146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:28.998164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.008081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.008145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.008163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.008172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.008179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.008193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.018130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.018195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.018209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.018216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.018223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.018238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.028146] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.028218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.028232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.028239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.028245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.028260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.038154] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.038217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.038230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.038238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.038245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.038259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.048242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.048302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.048317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.048324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.048331] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.048346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.058273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.058376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.058393] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.058401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.058408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.058423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.068259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.068322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.068335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.068343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.068349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.068363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.078270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.078378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.078395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.078403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.078409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.078424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.088325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.088383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.088397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.088404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.088410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.088425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.098361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.098434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.098449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.098456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.098462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.098477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.532 qpair failed and we were unable to recover it.
00:28:21.532 [2024-12-10 22:59:29.108337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.532 [2024-12-10 22:59:29.108434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.532 [2024-12-10 22:59:29.108450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.532 [2024-12-10 22:59:29.108458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.532 [2024-12-10 22:59:29.108465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.532 [2024-12-10 22:59:29.108480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.533 qpair failed and we were unable to recover it.
00:28:21.533 [2024-12-10 22:59:29.118367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.533 [2024-12-10 22:59:29.118422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.533 [2024-12-10 22:59:29.118437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.533 [2024-12-10 22:59:29.118444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.533 [2024-12-10 22:59:29.118454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.533 [2024-12-10 22:59:29.118469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.533 qpair failed and we were unable to recover it.
00:28:21.533 [2024-12-10 22:59:29.128380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.533 [2024-12-10 22:59:29.128433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.533 [2024-12-10 22:59:29.128447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.533 [2024-12-10 22:59:29.128454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.533 [2024-12-10 22:59:29.128461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.533 [2024-12-10 22:59:29.128475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.533 qpair failed and we were unable to recover it.
00:28:21.533 [2024-12-10 22:59:29.138410] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.138465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.138479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.138486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.138492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.138507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.148491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.148557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.148572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.148579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.148586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.148601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.158519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.158576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.158589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.158596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.158603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.158617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.168551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.168616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.168631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.168638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.168645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.168659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.178624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.178683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.178697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.178704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.178711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.178728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.188583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.188675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.188690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.188697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.188703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.188718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.198668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.198726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.198740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.198747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.198754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.198769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.208674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.208774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.208791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.208799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.208805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.208819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.218687] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.218746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.218760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.218768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.218774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.218789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.228742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.228795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.228809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.228816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.228822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.228836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.238739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.238791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.238806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.533 [2024-12-10 22:59:29.238813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.533 [2024-12-10 22:59:29.238819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.533 [2024-12-10 22:59:29.238832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.533 qpair failed and we were unable to recover it. 
00:28:21.533 [2024-12-10 22:59:29.248722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.533 [2024-12-10 22:59:29.248774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.533 [2024-12-10 22:59:29.248788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.534 [2024-12-10 22:59:29.248795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.534 [2024-12-10 22:59:29.248805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.534 [2024-12-10 22:59:29.248820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.534 qpair failed and we were unable to recover it. 
00:28:21.796 [2024-12-10 22:59:29.258791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.796 [2024-12-10 22:59:29.258880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.796 [2024-12-10 22:59:29.258894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.796 [2024-12-10 22:59:29.258901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.796 [2024-12-10 22:59:29.258908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.796 [2024-12-10 22:59:29.258922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.796 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.268790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.268854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.268868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.268876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.268882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.268898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.278843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.278905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.278919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.278925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.278932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.278946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.288944] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.289004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.289019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.289026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.289032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.289047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.298870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.298925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.298938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.298945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.298952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.298966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.308896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.308953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.308967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.308974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.308980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.308995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.318978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.319027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.319041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.319048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.319055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.319069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.328995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.329050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.329063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.329070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.329077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.329091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.338979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.339039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.339056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.339064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.339070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.339084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.349039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.349126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.349141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.349148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.797 [2024-12-10 22:59:29.349155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.797 [2024-12-10 22:59:29.349175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.797 qpair failed and we were unable to recover it. 
00:28:21.797 [2024-12-10 22:59:29.359033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.797 [2024-12-10 22:59:29.359098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.797 [2024-12-10 22:59:29.359113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.797 [2024-12-10 22:59:29.359121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.798 [2024-12-10 22:59:29.359127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.798 [2024-12-10 22:59:29.359142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.798 qpair failed and we were unable to recover it. 
00:28:21.798 [2024-12-10 22:59:29.369125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.798 [2024-12-10 22:59:29.369204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.798 [2024-12-10 22:59:29.369219] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.798 [2024-12-10 22:59:29.369226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.798 [2024-12-10 22:59:29.369232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.798 [2024-12-10 22:59:29.369248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.798 qpair failed and we were unable to recover it. 
00:28:21.798 [2024-12-10 22:59:29.379100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.798 [2024-12-10 22:59:29.379155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.798 [2024-12-10 22:59:29.379175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.798 [2024-12-10 22:59:29.379182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.798 [2024-12-10 22:59:29.379192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.798 [2024-12-10 22:59:29.379207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.798 qpair failed and we were unable to recover it. 
00:28:21.798 [2024-12-10 22:59:29.389199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.798 [2024-12-10 22:59:29.389253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.798 [2024-12-10 22:59:29.389267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.798 [2024-12-10 22:59:29.389275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.798 [2024-12-10 22:59:29.389281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.798 [2024-12-10 22:59:29.389296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.798 qpair failed and we were unable to recover it. 
00:28:21.798 [2024-12-10 22:59:29.399246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:21.798 [2024-12-10 22:59:29.399309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:21.798 [2024-12-10 22:59:29.399323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:21.798 [2024-12-10 22:59:29.399330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:21.798 [2024-12-10 22:59:29.399337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:21.798 [2024-12-10 22:59:29.399351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:21.798 qpair failed and we were unable to recover it. 
00:28:21.798 [2024-12-10 22:59:29.409181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.798 [2024-12-10 22:59:29.409238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.798 [2024-12-10 22:59:29.409252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.798 [2024-12-10 22:59:29.409260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.798 [2024-12-10 22:59:29.409266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.798 [2024-12-10 22:59:29.409281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.798 qpair failed and we were unable to recover it.
00:28:21.798 [2024-12-10 22:59:29.419294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.798 [2024-12-10 22:59:29.419376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.798 [2024-12-10 22:59:29.419390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.798 [2024-12-10 22:59:29.419398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.798 [2024-12-10 22:59:29.419404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.798 [2024-12-10 22:59:29.419418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.798 qpair failed and we were unable to recover it.
00:28:21.798 [2024-12-10 22:59:29.429252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.798 [2024-12-10 22:59:29.429308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.798 [2024-12-10 22:59:29.429323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.798 [2024-12-10 22:59:29.429330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.798 [2024-12-10 22:59:29.429337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.798 [2024-12-10 22:59:29.429351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.798 qpair failed and we were unable to recover it.
00:28:21.798 [2024-12-10 22:59:29.439316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.798 [2024-12-10 22:59:29.439399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.798 [2024-12-10 22:59:29.439413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.798 [2024-12-10 22:59:29.439421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.798 [2024-12-10 22:59:29.439427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.798 [2024-12-10 22:59:29.439440] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.798 qpair failed and we were unable to recover it.
00:28:21.798 [2024-12-10 22:59:29.449301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.798 [2024-12-10 22:59:29.449402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.798 [2024-12-10 22:59:29.449417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.798 [2024-12-10 22:59:29.449424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.798 [2024-12-10 22:59:29.449431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.798 [2024-12-10 22:59:29.449447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.798 qpair failed and we were unable to recover it.
00:28:21.798 [2024-12-10 22:59:29.459381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.799 [2024-12-10 22:59:29.459458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.799 [2024-12-10 22:59:29.459472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.799 [2024-12-10 22:59:29.459480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.799 [2024-12-10 22:59:29.459486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.799 [2024-12-10 22:59:29.459500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.799 qpair failed and we were unable to recover it.
00:28:21.799 [2024-12-10 22:59:29.469380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.799 [2024-12-10 22:59:29.469467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.799 [2024-12-10 22:59:29.469487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.799 [2024-12-10 22:59:29.469494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.799 [2024-12-10 22:59:29.469500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.799 [2024-12-10 22:59:29.469515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.799 qpair failed and we were unable to recover it.
00:28:21.799 [2024-12-10 22:59:29.479459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.799 [2024-12-10 22:59:29.479522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.799 [2024-12-10 22:59:29.479537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.799 [2024-12-10 22:59:29.479545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.799 [2024-12-10 22:59:29.479552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.799 [2024-12-10 22:59:29.479567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.799 qpair failed and we were unable to recover it.
00:28:21.799 [2024-12-10 22:59:29.489432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.799 [2024-12-10 22:59:29.489490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.799 [2024-12-10 22:59:29.489504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.799 [2024-12-10 22:59:29.489512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.799 [2024-12-10 22:59:29.489518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.799 [2024-12-10 22:59:29.489532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.799 qpair failed and we were unable to recover it.
00:28:21.799 [2024-12-10 22:59:29.499458] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.799 [2024-12-10 22:59:29.499520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.799 [2024-12-10 22:59:29.499534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.799 [2024-12-10 22:59:29.499541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.799 [2024-12-10 22:59:29.499548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.799 [2024-12-10 22:59:29.499563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.799 qpair failed and we were unable to recover it.
00:28:21.799 [2024-12-10 22:59:29.509474] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:21.799 [2024-12-10 22:59:29.509576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:21.799 [2024-12-10 22:59:29.509590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:21.799 [2024-12-10 22:59:29.509597] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:21.799 [2024-12-10 22:59:29.509607] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:21.799 [2024-12-10 22:59:29.509622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:21.799 qpair failed and we were unable to recover it.
00:28:22.090 [2024-12-10 22:59:29.519556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.090 [2024-12-10 22:59:29.519629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.090 [2024-12-10 22:59:29.519644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.090 [2024-12-10 22:59:29.519651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.090 [2024-12-10 22:59:29.519657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.090 [2024-12-10 22:59:29.519672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.090 qpair failed and we were unable to recover it.
00:28:22.090 [2024-12-10 22:59:29.529619] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.090 [2024-12-10 22:59:29.529685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.090 [2024-12-10 22:59:29.529701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.090 [2024-12-10 22:59:29.529709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.090 [2024-12-10 22:59:29.529715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.090 [2024-12-10 22:59:29.529730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.090 qpair failed and we were unable to recover it.
00:28:22.090 [2024-12-10 22:59:29.539631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.090 [2024-12-10 22:59:29.539691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.090 [2024-12-10 22:59:29.539706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.090 [2024-12-10 22:59:29.539713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.090 [2024-12-10 22:59:29.539721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.090 [2024-12-10 22:59:29.539735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.090 qpair failed and we were unable to recover it.
00:28:22.090 [2024-12-10 22:59:29.549666] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.549721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.549736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.549743] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.549749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.549765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.559720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.559828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.559842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.559849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.559855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.559870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.569699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.569749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.569764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.569771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.569778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.569792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.579751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.579822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.579836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.579843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.579850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.579865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.589769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.589827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.589841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.589848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.589855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.589869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.599777] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.599834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.599852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.599860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.599866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.599880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.609820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.609873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.609887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.609895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.609902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.609916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.619842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.619899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.619913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.619920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.619926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.619940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.629916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.629977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.629992] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.630000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.630006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.630020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.639887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.639943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.639957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.639964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.639974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.639988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.649909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.649964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.649979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.649987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.649993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.650008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.660003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.660063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.660077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.660085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.660091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.660105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.669990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.670051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.670065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.670073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.091 [2024-12-10 22:59:29.670079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.091 [2024-12-10 22:59:29.670094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.091 qpair failed and we were unable to recover it.
00:28:22.091 [2024-12-10 22:59:29.680006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.091 [2024-12-10 22:59:29.680059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.091 [2024-12-10 22:59:29.680073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.091 [2024-12-10 22:59:29.680079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.680086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.680100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.690051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.690104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.690119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.690127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.690133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.690148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.700074] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.700132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.700146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.700153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.700165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.700180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.710102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.710163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.710178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.710185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.710192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.710207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.720118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.720178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.720192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.720200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.720206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.720220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.730198] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.730268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.730288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.730295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.730302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.730317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.740242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.740348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.740363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.740370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.740376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.740390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.750216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:22.092 [2024-12-10 22:59:29.750272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:22.092 [2024-12-10 22:59:29.750287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:22.092 [2024-12-10 22:59:29.750295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:22.092 [2024-12-10 22:59:29.750301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0
00:28:22.092 [2024-12-10 22:59:29.750317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:22.092 qpair failed and we were unable to recover it.
00:28:22.092 [2024-12-10 22:59:29.760245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.092 [2024-12-10 22:59:29.760300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.092 [2024-12-10 22:59:29.760315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.092 [2024-12-10 22:59:29.760323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.092 [2024-12-10 22:59:29.760329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.092 [2024-12-10 22:59:29.760344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.092 qpair failed and we were unable to recover it. 
00:28:22.092 [2024-12-10 22:59:29.770270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.092 [2024-12-10 22:59:29.770324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.092 [2024-12-10 22:59:29.770338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.092 [2024-12-10 22:59:29.770346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.092 [2024-12-10 22:59:29.770356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.092 [2024-12-10 22:59:29.770371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.092 qpair failed and we were unable to recover it. 
00:28:22.092 [2024-12-10 22:59:29.780312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.092 [2024-12-10 22:59:29.780370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.092 [2024-12-10 22:59:29.780384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.092 [2024-12-10 22:59:29.780391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.092 [2024-12-10 22:59:29.780398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.092 [2024-12-10 22:59:29.780412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.092 qpair failed and we were unable to recover it. 
00:28:22.092 [2024-12-10 22:59:29.790336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.092 [2024-12-10 22:59:29.790387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.092 [2024-12-10 22:59:29.790401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.092 [2024-12-10 22:59:29.790408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.092 [2024-12-10 22:59:29.790415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.092 [2024-12-10 22:59:29.790429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.092 qpair failed and we were unable to recover it. 
00:28:22.092 [2024-12-10 22:59:29.800399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.092 [2024-12-10 22:59:29.800458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.092 [2024-12-10 22:59:29.800472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.092 [2024-12-10 22:59:29.800480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.092 [2024-12-10 22:59:29.800486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.092 [2024-12-10 22:59:29.800500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.092 qpair failed and we were unable to recover it. 
00:28:22.358 [2024-12-10 22:59:29.810375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.358 [2024-12-10 22:59:29.810441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.358 [2024-12-10 22:59:29.810455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.358 [2024-12-10 22:59:29.810462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.358 [2024-12-10 22:59:29.810468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.358 [2024-12-10 22:59:29.810483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.358 qpair failed and we were unable to recover it. 
00:28:22.358 [2024-12-10 22:59:29.820438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.358 [2024-12-10 22:59:29.820497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.358 [2024-12-10 22:59:29.820511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.358 [2024-12-10 22:59:29.820519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.358 [2024-12-10 22:59:29.820525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.358 [2024-12-10 22:59:29.820539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.358 qpair failed and we were unable to recover it. 
00:28:22.358 [2024-12-10 22:59:29.830469] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.358 [2024-12-10 22:59:29.830528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.830542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.830550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.830556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.830570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.840545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.840606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.840620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.840628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.840634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.840648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.850517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.850571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.850585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.850592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.850599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.850614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.860580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.860636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.860654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.860662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.860670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.860684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.870558] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.870617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.870631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.870638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.870645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.870659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.880597] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.880651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.880665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.880672] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.880678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.880693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.890651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.890708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.890722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.890729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.890736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.890750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.900665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.900723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.900740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.900748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.900758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.900775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.910698] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.910784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.910799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.910806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.910813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.910828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.920713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.920783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.920798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.920805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.920812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.920826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.930745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.930802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.930816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.930824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.930831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.930845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.940774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.940834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.359 [2024-12-10 22:59:29.940848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.359 [2024-12-10 22:59:29.940855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.359 [2024-12-10 22:59:29.940862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.359 [2024-12-10 22:59:29.940876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.359 qpair failed and we were unable to recover it. 
00:28:22.359 [2024-12-10 22:59:29.950789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.359 [2024-12-10 22:59:29.950892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:29.950907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:29.950913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:29.950920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:29.950935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:29.960849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:29.960904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:29.960917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:29.960924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:29.960931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:29.960946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:29.970851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:29.970905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:29.970921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:29.970928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:29.970935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:29.970950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:29.980891] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:29.980951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:29.980967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:29.980974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:29.980980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:29.980995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:29.990919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:29.990977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:29.990995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:29.991003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:29.991009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:29.991024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.000895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.000955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.000975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.000986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.000994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.001014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.011012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.011078] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.011099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.011107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.011114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.011133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.021016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.021077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.021093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.021102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.021110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.021125] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.031195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.031272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.031291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.031300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.031311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.031329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.041091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.041153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.041174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.041182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.041190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.041205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.051087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.051145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.051166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.051174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.051182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.051198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.061139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.061209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.061226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.061233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.061240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.061255] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.071186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.071248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.071266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.071275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.360 [2024-12-10 22:59:30.071281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.360 [2024-12-10 22:59:30.071297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.360 qpair failed and we were unable to recover it. 
00:28:22.360 [2024-12-10 22:59:30.081189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.360 [2024-12-10 22:59:30.081297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.360 [2024-12-10 22:59:30.081312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.360 [2024-12-10 22:59:30.081319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.361 [2024-12-10 22:59:30.081325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.361 [2024-12-10 22:59:30.081340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.361 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.091199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.091253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.091268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.091275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.091282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.091297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.101324] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.101403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.101418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.101425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.101432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.101447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.111266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.111344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.111359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.111366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.111373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.111388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.121287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.121348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.121368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.121376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.121382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.121397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.131352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.131417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.131430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.131438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.131444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.131458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.141331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.141387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.141401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.141408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.141414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.141429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.151381] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.151436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.151451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.151458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.151465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.151481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.161401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.161460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.161475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.161482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.161492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.161508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.171426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.171484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.171499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.171506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.171513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.171527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.181488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.181582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.181596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.181603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.181610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.181624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.191495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.191549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.191562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.191570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.621 [2024-12-10 22:59:30.191576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.621 [2024-12-10 22:59:30.191591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.621 qpair failed and we were unable to recover it. 
00:28:22.621 [2024-12-10 22:59:30.201498] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.621 [2024-12-10 22:59:30.201598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.621 [2024-12-10 22:59:30.201612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.621 [2024-12-10 22:59:30.201619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.201626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.622 [2024-12-10 22:59:30.201640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.622 qpair failed and we were unable to recover it. 
00:28:22.622 [2024-12-10 22:59:30.211556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.211612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.211626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.211634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.211640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xa92be0 00:28:22.622 [2024-12-10 22:59:30.211654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:28:22.622 qpair failed and we were unable to recover it. 
00:28:22.622 [2024-12-10 22:59:30.221635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.221735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.221789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.221814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.221835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f0000b90 00:28:22.622 [2024-12-10 22:59:30.221886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:22.622 qpair failed and we were unable to recover it. 
00:28:22.622 [2024-12-10 22:59:30.231624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.231704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.231732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.231746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.231759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f0000b90 00:28:22.622 [2024-12-10 22:59:30.231791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:22.622 qpair failed and we were unable to recover it. 
00:28:22.622 [2024-12-10 22:59:30.241654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.241757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.241811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.241835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.241855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0ec000b90 00:28:22.622 [2024-12-10 22:59:30.241905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.622 qpair failed and we were unable to recover it. 
00:28:22.622 [2024-12-10 22:59:30.251667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.251738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.251773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.251788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.251801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0ec000b90 00:28:22.622 [2024-12-10 22:59:30.251831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:22.622 qpair failed and we were unable to recover it. 00:28:22.622 [2024-12-10 22:59:30.252016] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:28:22.622 A controller has encountered a failure and is being reset. 
00:28:22.622 [2024-12-10 22:59:30.261898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.262000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.262055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.262081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.262101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:22.622 [2024-12-10 22:59:30.262151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.622 qpair failed and we were unable to recover it. 
00:28:22.622 [2024-12-10 22:59:30.271761] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:22.622 [2024-12-10 22:59:30.271841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:22.622 [2024-12-10 22:59:30.271869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:22.622 [2024-12-10 22:59:30.271883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:22.622 [2024-12-10 22:59:30.271897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fd0f8000b90 00:28:22.622 [2024-12-10 22:59:30.271928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:22.622 qpair failed and we were unable to recover it. 00:28:22.622 Controller properly reset. 00:28:22.622 Initializing NVMe Controllers 00:28:22.622 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:22.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:22.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:22.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:22.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:22.622 Initialization complete. Launching workers. 
00:28:22.622 Starting thread on core 1 00:28:22.622 Starting thread on core 2 00:28:22.622 Starting thread on core 3 00:28:22.622 Starting thread on core 0 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:22.622 00:28:22.622 real 0m10.681s 00:28:22.622 user 0m19.528s 00:28:22.622 sys 0m4.783s 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:22.622 ************************************ 00:28:22.622 END TEST nvmf_target_disconnect_tc2 00:28:22.622 ************************************ 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:22.622 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:22.882 rmmod nvme_tcp 00:28:22.882 rmmod nvme_fabrics 00:28:22.882 rmmod nvme_keyring 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2535652 ']' 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2535652 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2535652 ']' 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2535652 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2535652 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2535652' 00:28:22.882 killing process with pid 2535652 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2535652 00:28:22.882 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2535652 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.143 22:59:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.049 22:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:25.049 00:28:25.049 real 0m19.416s 00:28:25.049 user 0m46.689s 00:28:25.049 sys 0m9.741s 00:28:25.049 22:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.049 22:59:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:25.049 ************************************ 00:28:25.049 END TEST nvmf_target_disconnect 00:28:25.049 ************************************ 00:28:25.049 22:59:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:25.049 00:28:25.049 real 5m51.356s 00:28:25.049 user 10m32.765s 00:28:25.049 sys 1m58.343s 00:28:25.049 22:59:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:25.049 22:59:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:28:25.049 ************************************ 00:28:25.049 END TEST nvmf_host 00:28:25.049 ************************************ 00:28:25.309 22:59:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:28:25.309 22:59:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:28:25.309 22:59:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:25.309 22:59:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:25.309 22:59:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.309 22:59:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.309 ************************************ 00:28:25.309 START TEST nvmf_target_core_interrupt_mode 00:28:25.309 ************************************ 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:25.309 * Looking for test storage... 
00:28:25.309 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:25.309 22:59:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:25.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.309 --rc 
genhtml_branch_coverage=1 00:28:25.309 --rc genhtml_function_coverage=1 00:28:25.309 --rc genhtml_legend=1 00:28:25.309 --rc geninfo_all_blocks=1 00:28:25.309 --rc geninfo_unexecuted_blocks=1 00:28:25.309 00:28:25.309 ' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:25.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.309 --rc genhtml_branch_coverage=1 00:28:25.309 --rc genhtml_function_coverage=1 00:28:25.309 --rc genhtml_legend=1 00:28:25.309 --rc geninfo_all_blocks=1 00:28:25.309 --rc geninfo_unexecuted_blocks=1 00:28:25.309 00:28:25.309 ' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:25.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.309 --rc genhtml_branch_coverage=1 00:28:25.309 --rc genhtml_function_coverage=1 00:28:25.309 --rc genhtml_legend=1 00:28:25.309 --rc geninfo_all_blocks=1 00:28:25.309 --rc geninfo_unexecuted_blocks=1 00:28:25.309 00:28:25.309 ' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:25.309 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.309 --rc genhtml_branch_coverage=1 00:28:25.309 --rc genhtml_function_coverage=1 00:28:25.309 --rc genhtml_legend=1 00:28:25.309 --rc geninfo_all_blocks=1 00:28:25.309 --rc geninfo_unexecuted_blocks=1 00:28:25.309 00:28:25.309 ' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:25.309 22:59:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.309 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.310 22:59:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.310 
22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:25.310 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:25.569 ************************************ 00:28:25.569 START TEST nvmf_abort 00:28:25.569 ************************************ 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:25.569 * Looking for test storage... 
00:28:25.569 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.569 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:25.570 22:59:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:25.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.570 --rc genhtml_branch_coverage=1 00:28:25.570 --rc genhtml_function_coverage=1 00:28:25.570 --rc genhtml_legend=1 00:28:25.570 --rc geninfo_all_blocks=1 00:28:25.570 --rc geninfo_unexecuted_blocks=1 00:28:25.570 00:28:25.570 ' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:25.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.570 --rc genhtml_branch_coverage=1 00:28:25.570 --rc genhtml_function_coverage=1 00:28:25.570 --rc genhtml_legend=1 00:28:25.570 --rc geninfo_all_blocks=1 00:28:25.570 --rc geninfo_unexecuted_blocks=1 00:28:25.570 00:28:25.570 ' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:25.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.570 --rc genhtml_branch_coverage=1 00:28:25.570 --rc genhtml_function_coverage=1 00:28:25.570 --rc genhtml_legend=1 00:28:25.570 --rc geninfo_all_blocks=1 00:28:25.570 --rc geninfo_unexecuted_blocks=1 00:28:25.570 00:28:25.570 ' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:25.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.570 --rc genhtml_branch_coverage=1 00:28:25.570 --rc genhtml_function_coverage=1 00:28:25.570 --rc genhtml_legend=1 00:28:25.570 --rc geninfo_all_blocks=1 00:28:25.570 --rc geninfo_unexecuted_blocks=1 00:28:25.570 00:28:25.570 ' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.570 22:59:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:25.570 22:59:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:25.570 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:28:25.571 22:59:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.152 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:32.153 22:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:32.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:32.153 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:32.153 
22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:32.153 Found net devices under 0000:86:00.0: cvl_0_0 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:32.153 Found net devices under 0000:86:00.1: cvl_0_1 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:32.153 22:59:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:32.153 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:32.154 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:32.154 22:59:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:32.154 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:32.154 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:28:32.154 00:28:32.154 --- 10.0.0.2 ping statistics --- 00:28:32.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.154 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:32.154 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:32.154 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:32.154 00:28:32.154 --- 10.0.0.1 ping statistics --- 00:28:32.154 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:32.154 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2540204 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2540204 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2540204 ']' 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:32.154 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 [2024-12-10 22:59:39.274248] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:32.154 [2024-12-10 22:59:39.275196] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:28:32.154 [2024-12-10 22:59:39.275231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:32.154 [2024-12-10 22:59:39.352998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:32.154 [2024-12-10 22:59:39.391480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:32.154 [2024-12-10 22:59:39.391519] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:32.154 [2024-12-10 22:59:39.391526] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:32.154 [2024-12-10 22:59:39.391532] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:32.154 [2024-12-10 22:59:39.391539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:32.154 [2024-12-10 22:59:39.392923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:32.154 [2024-12-10 22:59:39.393010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:32.154 [2024-12-10 22:59:39.393011] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:32.154 [2024-12-10 22:59:39.460901] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:32.154 [2024-12-10 22:59:39.461722] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:32.154 [2024-12-10 22:59:39.461871] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:32.154 [2024-12-10 22:59:39.462044] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 [2024-12-10 22:59:39.541876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:32.154 Malloc0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 Delay0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 [2024-12-10 22:59:39.637833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.154 22:59:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:32.155 [2024-12-10 22:59:39.762820] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:34.689 Initializing NVMe Controllers 00:28:34.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:34.689 controller IO queue size 128 less than required 00:28:34.689 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:34.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:34.689 Initialization complete. Launching workers. 
00:28:34.689 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36718 00:28:34.689 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36775, failed to submit 66 00:28:34.689 success 36718, unsuccessful 57, failed 0 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:34.689 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:34.689 rmmod nvme_tcp 00:28:34.689 rmmod nvme_fabrics 00:28:34.689 rmmod nvme_keyring 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:34.690 22:59:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2540204 ']' 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2540204 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2540204 ']' 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2540204 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.690 22:59:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2540204 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2540204' 00:28:34.690 killing process with pid 2540204 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2540204 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2540204 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:34.690 22:59:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:34.690 22:59:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:36.595 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:36.595 00:28:36.595 real 0m11.230s 00:28:36.595 user 0m10.602s 00:28:36.595 sys 0m5.671s 00:28:36.595 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:36.595 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:36.595 ************************************ 00:28:36.595 END TEST nvmf_abort 00:28:36.595 ************************************ 00:28:36.854 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:36.854 ************************************ 00:28:36.854 START TEST nvmf_ns_hotplug_stress 00:28:36.854 ************************************ 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:36.854 * Looking for test storage... 
00:28:36.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:36.854 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:36.854 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:36.855 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:36.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.855 --rc genhtml_branch_coverage=1 00:28:36.855 --rc genhtml_function_coverage=1 00:28:36.855 --rc genhtml_legend=1 00:28:36.855 --rc geninfo_all_blocks=1 00:28:36.855 --rc geninfo_unexecuted_blocks=1 00:28:36.855 00:28:36.855 ' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:36.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.855 --rc genhtml_branch_coverage=1 00:28:36.855 --rc genhtml_function_coverage=1 00:28:36.855 --rc genhtml_legend=1 00:28:36.855 --rc geninfo_all_blocks=1 00:28:36.855 --rc geninfo_unexecuted_blocks=1 00:28:36.855 00:28:36.855 ' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:36.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.855 --rc genhtml_branch_coverage=1 00:28:36.855 --rc genhtml_function_coverage=1 00:28:36.855 --rc genhtml_legend=1 00:28:36.855 --rc geninfo_all_blocks=1 00:28:36.855 --rc geninfo_unexecuted_blocks=1 00:28:36.855 00:28:36.855 ' 00:28:36.855 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:36.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:36.855 --rc genhtml_branch_coverage=1 00:28:36.855 --rc genhtml_function_coverage=1 00:28:36.855 --rc genhtml_legend=1 00:28:36.855 --rc geninfo_all_blocks=1 00:28:36.855 --rc geninfo_unexecuted_blocks=1 00:28:36.855 00:28:36.855 ' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:36.855 22:59:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.855 
22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:36.855 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:28:37.114 22:59:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.685 
22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.685 22:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:43.685 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.685 22:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:43.685 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.685 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.686 
22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:43.686 Found net devices under 0000:86:00.0: cvl_0_0 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:43.686 Found net devices under 0000:86:00.1: cvl_0_1 00:28:43.686 
22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:43.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:28:43.686 00:28:43.686 --- 10.0.0.2 ping statistics --- 00:28:43.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.686 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:28:43.686 00:28:43.686 --- 10.0.0.1 ping statistics --- 00:28:43.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.686 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.686 22:59:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2544191 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2544191 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2544191 ']' 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.686 22:59:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.686 [2024-12-10 22:59:50.567229] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:43.686 [2024-12-10 22:59:50.568137] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:28:43.686 [2024-12-10 22:59:50.568178] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.686 [2024-12-10 22:59:50.645029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:43.686 [2024-12-10 22:59:50.686727] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.686 [2024-12-10 22:59:50.686762] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.686 [2024-12-10 22:59:50.686769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.686 [2024-12-10 22:59:50.686775] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.686 [2024-12-10 22:59:50.686780] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:43.686 [2024-12-10 22:59:50.688201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.686 [2024-12-10 22:59:50.688308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.686 [2024-12-10 22:59:50.688309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.686 [2024-12-10 22:59:50.757172] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:43.686 [2024-12-10 22:59:50.758068] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:43.687 [2024-12-10 22:59:50.758411] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:43.687 [2024-12-10 22:59:50.758512] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:43.687 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.687 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:43.687 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.687 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.687 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:43.946 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.946 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:28:43.946 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:43.946 [2024-12-10 22:59:51.609077] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.946 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:44.205 22:59:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:44.464 [2024-12-10 22:59:52.013418] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:44.464 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:44.723 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:44.723 Malloc0 00:28:44.982 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:44.982 Delay0 00:28:44.982 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.241 22:59:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:45.500 NULL1 00:28:45.500 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:45.759 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:28:45.759 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2544677 00:28:45.759 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:45.759 22:59:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:46.696 Read completed with error (sct=0, sc=11) 00:28:46.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.955 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:46.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.955 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:28:46.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:46.955 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:46.955 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:47.214 true 00:28:47.214 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:47.214 22:59:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.150 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.150 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:48.150 22:59:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:48.409 true 00:28:48.409 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:48.409 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.668 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.926 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:48.926 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:48.926 true 00:28:48.926 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:48.926 22:59:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:50.303 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:50.303 22:59:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:50.563 true 00:28:50.563 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:50.563 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:51.500 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:51.500 22:59:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.500 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:51.500 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:51.758 true 00:28:51.758 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:51.758 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.017 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.018 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:52.018 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:52.276 true 00:28:52.276 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:52.276 22:59:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:53.653 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:53.653 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 
00:28:53.913 true 00:28:53.913 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:53.913 23:00:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:54.849 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.849 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:54.849 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:55.108 true 00:28:55.108 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:55.108 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:55.367 23:00:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.626 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:55.626 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1009 00:28:55.626 true 00:28:55.626 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:55.626 23:00:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:57.004 23:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:57.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:57.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:57.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:57.004 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:57.004 23:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:57.004 23:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:57.263 true 00:28:57.263 23:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:57.263 23:00:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.197 23:00:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.197 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:58.197 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:58.455 true 00:28:58.455 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:58.455 23:00:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:58.713 23:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:58.713 23:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:58.713 23:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:58.972 true 00:28:58.972 23:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:28:58.972 23:00:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 23:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:00.350 23:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:29:00.350 23:00:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:29:00.609 true 00:29:00.609 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:00.609 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:01.545 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:01.545 23:00:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:01.545 23:00:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:29:01.545 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:29:01.803 true 00:29:01.803 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:01.804 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:02.062 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:02.062 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:29:02.062 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:29:02.321 true 00:29:02.321 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:02.321 23:00:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:03.695 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:29:03.695 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:29:03.952 true 00:29:03.952 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:03.952 23:00:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:04.883 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:04.883 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:29:04.883 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 
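The cycle that repeats throughout the log above (check the perf process with `kill -0`, remove namespace 1, re-add Delay0, then grow and resize NULL1) can be sketched as a minimal loop. This is an illustrative reconstruction, not the actual ns_hotplug_stress.sh: the `rpc` function below is a hypothetical stand-in that echoes what scripts/rpc.py would be invoked with, so the sketch runs without a live SPDK target, and `PERF_PID` is set to the current shell rather than a real spdk_nvme_perf process.

```shell
#!/usr/bin/env bash
# Sketch of the remove/add/resize cycle seen in the log above.
# rpc() is a stub standing in for scripts/rpc.py (assumption: echo only).
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
PERF_PID=$$              # in the real test this is the spdk_nvme_perf PID
null_size=1000

for _ in 1 2 3; do
  kill -0 "$PERF_PID" || break               # stop once the perf run exits
  rpc nvmf_subsystem_remove_ns "$NQN" 1      # hot-remove namespace 1
  rpc nvmf_subsystem_add_ns "$NQN" Delay0    # re-attach the Delay0 bdev
  null_size=$((null_size + 1))               # null_size=1001, 1002, ...
  rpc bdev_null_resize NULL1 "$null_size"    # resize NULL1 under load
done
```

Each pass mirrors one `@45`/`@46`/`@49`/`@50` group of log entries; the test keeps cycling until `kill -0` on the perf PID fails.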
00:29:05.142 true 00:29:05.142 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:05.142 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.400 23:00:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:05.400 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:29:05.400 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:29:05.658 true 00:29:05.658 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:05.658 23:00:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.033 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:07.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.033 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:29:07.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.033 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:07.033 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:29:07.033 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:29:07.291 true 00:29:07.291 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:07.291 23:00:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:07.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.118 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.118 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:08.118 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:29:08.118 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:29:08.377 true 00:29:08.377 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:08.377 23:00:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:08.635 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:08.893 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:29:08.893 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:29:08.893 true 00:29:08.893 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:08.893 23:00:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:10.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.270 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:10.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.270 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:10.270 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:29:10.270 23:00:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:29:10.529 true 00:29:10.529 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:10.529 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.464 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:11.464 23:00:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:11.464 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:29:11.464 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:29:11.722 true 00:29:11.722 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:11.722 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:11.980 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:29:12.238 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:29:12.238 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:29:12.238 true 00:29:12.238 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:12.238 23:00:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 23:00:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:13.615 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:29:13.615 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 
00:29:13.873 true 00:29:13.873 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:13.873 23:00:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:14.808 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:29:14.808 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:14.808 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:29:14.808 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:29:15.066 true 00:29:15.066 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:15.066 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:15.324 23:00:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.583 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:29:15.583 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:29:15.583 true 00:29:15.583 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:15.583 23:00:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:16.958 Initializing NVMe Controllers 00:29:16.958 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.958 Controller IO queue size 128, less than required. 00:29:16.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.958 Controller IO queue size 128, less than required. 00:29:16.958 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:16.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.958 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:16.958 Initialization complete. Launching workers. 
00:29:16.958 ======================================================== 00:29:16.958 Latency(us) 00:29:16.958 Device Information : IOPS MiB/s Average min max 00:29:16.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2273.67 1.11 41130.69 2163.91 1023331.95 00:29:16.958 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18250.63 8.91 7013.23 1322.51 383170.64 00:29:16.958 ======================================================== 00:29:16.958 Total : 20524.30 10.02 10792.74 1322.51 1023331.95 00:29:16.958 00:29:16.958 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:16.958 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:16.958 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:17.216 true 00:29:17.216 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2544677 00:29:17.216 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2544677) - No such process 00:29:17.216 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2544677 00:29:17.216 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:17.216 23:00:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:17.482 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:17.482 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:17.482 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:17.482 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.482 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:17.741 null0 00:29:17.741 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:17.741 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.741 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:17.999 null1 00:29:17.999 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:17.999 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.999 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:17.999 null2 00:29:17.999 23:00:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:17.999 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:17.999 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:18.258 null3 00:29:18.258 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.258 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.258 23:00:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:18.516 null4 00:29:18.516 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.516 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.516 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:18.775 null5 00:29:18.775 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.775 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.775 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:18.775 null6 00:29:18.775 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:18.775 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:18.775 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:19.048 null7 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2550537 2550540 2550542 2550545 2550548 2550550 2550552 2550554 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 
bdev=null7 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.048 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:19.340 23:00:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:19.632 
23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:19.632 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:19.916 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.174 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.433 23:00:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.433 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.433 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.434 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:20.434 23:00:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.434 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:20.434 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.434 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.434 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.692 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:20.693 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:20.952 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.210 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.210 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.210 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.210 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.211 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.469 23:00:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.469 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:21.727 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:21.986 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.245 23:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.245 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.504 23:00:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.504 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.504 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:22.504 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.504 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.505 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.505 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.505 23:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:22.505 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:22.763 23:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.763 23:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.763 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:22.764 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.764 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.764 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:22.764 23:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:22.764 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:22.764 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:23.022 
23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:23.022 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 
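The add/remove churn traced above (ns_hotplug_stress.sh lines @16–@18) is driven by a loop of roughly this shape. This is a sketch reconstructed from the xtrace, not the verbatim SPDK script: `rpc_py` is a stub standing in for `spdk/scripts/rpc.py`, and the parallel `&`/`wait` structure is inferred from the scrambled ordering of the add_ns calls in the log.

```shell
#!/usr/bin/env bash
# Sketch of the hotplug-stress loop (assumption: reconstructed from the
# xtrace above, not the actual target/ns_hotplug_stress.sh).
rpc_py() { echo "rpc $*"; }   # stub; the real script invokes spdk/scripts/rpc.py

NQN=nqn.2016-06.io.spdk:cnode1

for ((i = 0; i < 10; i++)); do
    # Attach eight null bdevs as namespaces 1..8 in parallel (null0 backs
    # nsid 1, null7 backs nsid 8, matching the trace), then detach them all.
    for ns in {1..8}; do
        rpc_py nvmf_subsystem_add_ns -n "$ns" "$NQN" "null$((ns - 1))" &
    done
    wait
    for ns in {1..8}; do
        rpc_py nvmf_subsystem_remove_ns "$NQN" "$ns" &
    done
    wait
done
```

Backgrounding the RPC calls is what produces the out-of-order `-n 3`, `-n 4`, `-n 1`, … sequence in the trace: each round races eight attach requests against the target before tearing them down again.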
23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.282 rmmod nvme_tcp 00:29:23.282 rmmod nvme_fabrics 00:29:23.282 rmmod nvme_keyring 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.282 23:00:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2544191 ']' 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2544191 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2544191 ']' 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2544191 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.282 23:00:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2544191 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2544191' 00:29:23.541 killing process with pid 2544191 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2544191 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2544191 00:29:23.541 
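The `killprocess 2544191` trace above walks through a guarded kill helper: an empty-pid check, a `kill -0` liveness probe, a `uname`/`ps -o comm=` lookup, a refusal to kill a bare `sudo` wrapper, then `kill` and `wait`. A minimal sketch of that logic, reconstructed from the xtrace of common/autotest_common.sh (argument handling simplified, and the sudo branch is an assumption based on the `'[' reactor_1 = sudo ']'` test in the log):

```shell
# Sketch of the killprocess helper traced above (reconstructed, not the
# verbatim common/autotest_common.sh implementation).
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1              # mirrors the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 0 # process already gone: nothing to do
    if [ "$(uname)" = Linux ]; then
        # Look up the command name; refuse to kill a bare sudo wrapper.
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true        # reap it so the pid cannot be reused
}
```

The `wait` at the end is what the `@978 -- # wait 2544191` line in the trace corresponds to: it blocks until the target app has actually exited before cleanup continues.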
23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.541 23:00:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.078 00:29:26.078 real 0m48.889s 00:29:26.078 user 3m0.625s 00:29:26.078 sys 0m20.116s 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:26.078 ************************************ 00:29:26.078 END TEST nvmf_ns_hotplug_stress 00:29:26.078 ************************************ 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:26.078 ************************************ 00:29:26.078 START TEST nvmf_delete_subsystem 00:29:26.078 ************************************ 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:26.078 * Looking for test storage... 
00:29:26.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:26.078 23:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:26.078 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:26.079 23:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.079 --rc genhtml_branch_coverage=1 00:29:26.079 --rc genhtml_function_coverage=1 00:29:26.079 --rc genhtml_legend=1 00:29:26.079 --rc geninfo_all_blocks=1 00:29:26.079 --rc geninfo_unexecuted_blocks=1 00:29:26.079 00:29:26.079 ' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.079 --rc genhtml_branch_coverage=1 00:29:26.079 --rc genhtml_function_coverage=1 00:29:26.079 --rc genhtml_legend=1 00:29:26.079 --rc geninfo_all_blocks=1 00:29:26.079 --rc geninfo_unexecuted_blocks=1 00:29:26.079 00:29:26.079 ' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.079 --rc genhtml_branch_coverage=1 00:29:26.079 --rc genhtml_function_coverage=1 00:29:26.079 --rc genhtml_legend=1 00:29:26.079 --rc geninfo_all_blocks=1 00:29:26.079 --rc geninfo_unexecuted_blocks=1 00:29:26.079 00:29:26.079 ' 00:29:26.079 23:00:33 
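The `lt 1.15 2` trace above exercises the version comparator in scripts/common.sh: each version string is split on `.`, `-`, and `:` into an array, and components are compared numerically left to right. A runnable sketch of that logic, reconstructed from the xtrace (the real helper also validates each component with a `decimal` check, omitted here):

```shell
# Sketch of cmp_versions/lt from scripts/common.sh (reconstructed from the
# xtrace; simplified to purely numeric components).
cmp_versions() {
    local ver1 ver2 ver1_l ver2_l v op=$2
    IFS=.-: read -ra ver1 <<< "$1"   # split "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$3"
    ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components read as 0
        if ((a > b)); then
            [[ $op == '>' ]]; return
        elif ((a < b)); then
            [[ $op == '<' ]]; return
        fi
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }
```

Component-wise numeric comparison is what makes `lt 1.15 2` succeed here (1 < 2 at the first component), as the `return 0` in the trace confirms; a plain string comparison would get cases like `1.2` vs `1.10` wrong.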
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:26.079 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:26.079 --rc genhtml_branch_coverage=1 00:29:26.079 --rc genhtml_function_coverage=1 00:29:26.079 --rc genhtml_legend=1 00:29:26.079 --rc geninfo_all_blocks=1 00:29:26.079 --rc geninfo_unexecuted_blocks=1 00:29:26.079 00:29:26.079 ' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:26.079 23:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.079 
23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
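The enormous PATH values in the paths/export.sh trace above repeat the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` entries many times because every nested `source` of the script prepends them again unconditionally. For contrast, a common idempotent-prepend guard looks like the following; this is a generic sketch of the technique, not SPDK's actual export.sh:

```shell
# Generic idempotent PATH prepend (assumption: illustrative pattern only,
# not what paths/export.sh does -- the trace shows it prepends every time).
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;                      # already present: do nothing
        *) PATH="$1${PATH:+:$PATH}" ;;    # prepend, keeping PATH non-empty-safe
    esac
}

PATH=/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
export PATH
```

The duplicates in the trace are harmless (lookup stops at the first match) but make the log hard to read and grow PATH on every re-source; the guard above keeps each directory listed once.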
nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.079 23:00:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.079 23:00:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:32.649 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:32.650 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:86:00.1 (0x8086 - 0x159b)' 00:29:32.650 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.650 23:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:32.650 Found net devices under 0000:86:00.0: cvl_0_0 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:32.650 Found net devices under 0000:86:00.1: cvl_0_1 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:32.650 23:00:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:29:32.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:29:32.650 00:29:32.650 --- 10.0.0.2 ping statistics --- 00:29:32.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.650 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:32.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:29:32.650 00:29:32.650 --- 10.0.0.1 ping statistics --- 00:29:32.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.650 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
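The two `ping -c 1` runs above verify bidirectional reachability between the host-side initiator interface (cvl_0_1, 10.0.0.1) and the target interface moved into the cvl_0_0_ns_spdk namespace (cvl_0_0, 10.0.0.2). The scripts only check ping's exit status; as an illustrative aside (not part of the test suite), the rtt summary line can be parsed like this:

```python
import re

# Parse a ping summary line such as
#   "rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms"
# into a dict of floats.  Illustrative helper only; nvmftestinit itself
# relies on ping's exit status, not on its output.
def parse_rtt(line: str) -> dict:
    m = re.search(
        r"rtt min/avg/max/mdev = ([\d.]+)/([\d.]+)/([\d.]+)/([\d.]+) ms", line
    )
    if not m:
        raise ValueError("no rtt summary found")
    keys = ("min", "avg", "max", "mdev")
    return dict(zip(keys, (float(g) for g in m.groups())))
```

Applied to the log above, this yields 0.305 ms for the host-to-target direction and 0.161 ms for the namespaced target-to-host direction.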
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2554912 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2554912 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2554912 ']' 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.650 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 [2024-12-10 23:00:39.448668] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:32.651 [2024-12-10 23:00:39.449577] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:29:32.651 [2024-12-10 23:00:39.449610] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.651 [2024-12-10 23:00:39.529121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:32.651 [2024-12-10 23:00:39.567139] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.651 [2024-12-10 23:00:39.567181] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.651 [2024-12-10 23:00:39.567190] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.651 [2024-12-10 23:00:39.567196] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.651 [2024-12-10 23:00:39.567201] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.651 [2024-12-10 23:00:39.568374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:32.651 [2024-12-10 23:00:39.568374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.651 [2024-12-10 23:00:39.637764] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:29:32.651 [2024-12-10 23:00:39.638398] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:32.651 [2024-12-10 23:00:39.638578] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 [2024-12-10 23:00:39.717063] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 [2024-12-10 23:00:39.745430] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 NULL1 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 Delay0 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2555026 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:32.651 23:00:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:32.651 [2024-12-10 23:00:39.859337] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:34.555 23:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:34.555 23:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.555 23:00:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, 
sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 starting I/O failed: -6 00:29:34.555 Write completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error (sct=0, sc=8) 00:29:34.555 Read completed with error 
(sct=0, sc=8)
00:29:34.555-00:29:34.556 starting I/O failed: -6 (repeated); Read/Write completed with error (sct=0, sc=8) reported for each queued I/O (identical completion lines elided)
00:29:34.556 [2024-12-10 23:00:42.075901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21134a0 is same with the state(6) to be set
00:29:34.556-00:29:35.494 Read/Write completed with error (sct=0, sc=8) (identical completion lines elided)
00:29:35.494 [2024-12-10 23:00:43.038189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21149b0 is same with the state(6) to be set
00:29:35.494 [2024-12-10 23:00:43.078370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f662c00d060 is same with the state(6) to be set
00:29:35.494 [2024-12-10 23:00:43.078782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21132c0 is same with the state(6) to be set
00:29:35.494 [2024-12-10 23:00:43.078903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f662c00d840 is same with the state(6) to be set
00:29:35.494 [2024-12-10 23:00:43.079972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2113680 is same with the state(6) to be set
00:29:35.494 Initializing NVMe Controllers
00:29:35.494 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:35.494 Controller IO queue size 128, less than required.
00:29:35.494 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:35.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:29:35.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:29:35.495 Initialization complete. Launching workers.
00:29:35.495 ========================================================
00:29:35.495 Latency(us)
00:29:35.495 Device Information : IOPS MiB/s Average min max
00:29:35.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.68 0.08 903168.19 440.29 2002072.94
00:29:35.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.68 0.08 897082.63 344.32 1012990.89
00:29:35.495 ========================================================
00:29:35.495 Total : 339.36 0.17 900125.41 344.32 2002072.94
00:29:35.495
00:29:35.495 [2024-12-10 23:00:43.080407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21149b0 (9): Bad file descriptor
00:29:35.495 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:35.495 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:35.495 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:29:35.495 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2555026
00:29:35.495 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2555026
00:29:36.063 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2555026) - No such process
00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2555026
00:29:36.063 23:00:43
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2555026 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2555026 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
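The Total row in the perf summary above is the IOPS-weighted mean of the per-core average latencies. A quick check against the core 2 and core 3 rows (the constants below are copied from the table; the awk program itself is only an illustration, not part of the test):

```shell
# Recompute the Total average latency from the two per-core rows above:
# core 2: 169.68 IOPS, 903168.19 us avg; core 3: 169.68 IOPS, 897082.63 us avg.
awk 'BEGIN {
    iops2 = 169.68; avg2 = 903168.19
    iops3 = 169.68; avg3 = 897082.63
    total = (iops2 * avg2 + iops3 * avg3) / (iops2 + iops3)
    printf "%.2f\n", total    # matches the 900125.41 printed in the Total row
}'
```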
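The xtrace above shows delete_subsystem.sh polling with kill -0 and sleep 0.5 until the perf process disappears, then confirming with NOT wait that it exited. A minimal standalone sketch of that polling pattern (wait_for_exit and the sleep workload are hypothetical stand-ins, not names from the SPDK script):

```shell
#!/usr/bin/env bash
# Poll until the given PID exits; give up after ~15s (30 iterations * 0.5s),
# mirroring the (( delay++ > 30 )) / kill -0 / sleep 0.5 loop traced above.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > 30 )); then
            return 1   # still running after the timeout
        fi
        sleep 0.5
    done
    return 0           # process is gone
}

sleep 1 &              # hypothetical workload standing in for spdk_nvme_perf
wait_for_exit "$!" && echo "process exited"
```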
00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:36.063 [2024-12-10 23:00:43.613322] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2555620 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 
trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:36.063 23:00:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:36.063 [2024-12-10 23:00:43.699143] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:36.631 23:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:36.631 23:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:36.631 23:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:37.197 23:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:37.197 23:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:37.197 23:00:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:37.455 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:37.455 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:37.455 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.021 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:29:38.021 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:38.021 23:00:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:38.588 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:38.588 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:38.588 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.155 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:39.155 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620 00:29:39.155 23:00:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:39.414 Initializing NVMe Controllers 00:29:39.414 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:39.414 Controller IO queue size 128, less than required. 00:29:39.414 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:39.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:39.414 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:39.414 Initialization complete. Launching workers. 
00:29:39.414 ========================================================
00:29:39.414 Latency(us)
00:29:39.414 Device Information : IOPS MiB/s Average min max
00:29:39.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002399.22 1000159.68 1008348.06
00:29:39.414 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004280.23 1000144.26 1042196.26
00:29:39.414 ========================================================
00:29:39.414 Total : 256.00 0.12 1003339.73 1000144.26 1042196.26
00:29:39.414
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2555620
00:29:39.673 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2555620) - No such process
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2555620
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.673 rmmod nvme_tcp 00:29:39.673 rmmod nvme_fabrics 00:29:39.673 rmmod nvme_keyring 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2554912 ']' 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2554912 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2554912 ']' 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2554912 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2554912 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:39.673 23:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2554912' 00:29:39.673 killing process with pid 2554912 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2554912 00:29:39.673 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2554912 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.933 23:00:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.933 23:00:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.839 00:29:41.839 real 0m16.179s 00:29:41.839 user 0m26.479s 00:29:41.839 sys 0m6.027s 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:41.839 ************************************ 00:29:41.839 END TEST nvmf_delete_subsystem 00:29:41.839 ************************************ 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.839 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:42.099 ************************************ 00:29:42.099 START TEST nvmf_host_management 00:29:42.099 ************************************ 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:42.099 * Looking for test storage... 
00:29:42.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:42.099 23:00:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:42.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.099 --rc genhtml_branch_coverage=1 00:29:42.099 --rc genhtml_function_coverage=1 00:29:42.099 --rc genhtml_legend=1 00:29:42.099 --rc geninfo_all_blocks=1 00:29:42.099 --rc geninfo_unexecuted_blocks=1 00:29:42.099 00:29:42.099 ' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:42.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.099 --rc genhtml_branch_coverage=1 00:29:42.099 --rc genhtml_function_coverage=1 00:29:42.099 --rc genhtml_legend=1 00:29:42.099 --rc geninfo_all_blocks=1 00:29:42.099 --rc geninfo_unexecuted_blocks=1 00:29:42.099 00:29:42.099 ' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:42.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.099 --rc genhtml_branch_coverage=1 00:29:42.099 --rc genhtml_function_coverage=1 00:29:42.099 --rc genhtml_legend=1 00:29:42.099 --rc geninfo_all_blocks=1 00:29:42.099 --rc geninfo_unexecuted_blocks=1 00:29:42.099 00:29:42.099 ' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:42.099 
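The `cmp_versions 1.15 '<' 2` trace above walks the dot-separated version fields one by one (`ver1[v]` vs `ver2[v]`). The same check can be sketched standalone with `sort -V` instead of SPDK's field loop; the helper name `version_lt` is illustrative, not from the scripts:

```shell
#!/usr/bin/env bash
# version_lt A B -> success (0) iff version A sorts strictly before B.
# Hypothetical helper; SPDK's scripts/common.sh compares dot/dash-separated
# fields in a loop rather than shelling out to sort -V.
version_lt() {
    [ "$1" = "$2" ] && return 1                       # equal is not "less than"
    [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

version_lt 1.15 2 && echo "1.15 < 2"                  # same result as the trace
```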
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:42.099 --rc genhtml_branch_coverage=1 00:29:42.099 --rc genhtml_function_coverage=1 00:29:42.099 --rc genhtml_legend=1 00:29:42.099 --rc geninfo_all_blocks=1 00:29:42.099 --rc geninfo_unexecuted_blocks=1 00:29:42.099 00:29:42.099 ' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.099 23:00:49 
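The `LCOV_OPTS` block above only adds the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags after confirming the installed lcov is older than 2. A simplified sketch of that gating (the helper `lcov_opts_for` is hypothetical, and the version parsing is reduced to a glob match):

```shell
#!/usr/bin/env bash
# Decide extra lcov flags from a version string, mirroring the
# "lt 1.15 2" gating in the trace above (1.x gets the --rc flags).
lcov_opts_for() {
    case "$1" in
        1.*) echo "--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1" ;;
        *)   echo "" ;;
    esac
}

# Same extraction as the trace: last field of `lcov --version`.
ver=$(lcov --version 2>/dev/null | awk '{print $NF}')
LCOV_OPTS=$(lcov_opts_for "$ver")
echo "LCOV_OPTS=$LCOV_OPTS"
```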
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.099 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.100 
23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
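Each re-source of `paths/export.sh` above prepends the same Go/protoc/golangci directories again, so `PATH` accumulates many duplicate entries. A generic first-seen-wins dedup can be sketched as below (this is not what `paths/export.sh` itself does; it only prepends):

```shell
#!/usr/bin/env bash
# Remove duplicate entries from a colon-separated path list,
# keeping the first occurrence of each directory in order.
dedup_path() {
    printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/bin"
# -> /opt/go/bin:/usr/bin
```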
-n '' ']' 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:29:42.100 23:00:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:29:48.671 
23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.671 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:48.671 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.671 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:48.671 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.671 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.671 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:48.671 Found net devices under 0000:86:00.0: cvl_0_0 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:48.672 Found net devices under 0000:86:00.1: cvl_0_1 00:29:48.672 23:00:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
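The `Found net devices under <pci-addr>` lines above come from globbing `/sys/bus/pci/devices/$pci/net/*` for each detected NIC. The reverse lookup can be sketched as below, walking `/sys/class/net` and reporting each interface's backing PCI address; the helper `pci_addr_of` is hypothetical:

```shell
#!/usr/bin/env bash
# Given an interface's resolved /sys device path, print its PCI address
# (the last path component) or fail if the device is not PCI-backed.
pci_addr_of() {
    case "$1" in
        */pci*/*) basename "$1" ;;
        *) return 1 ;;
    esac
}

for dev in /sys/class/net/*; do
    link=$(readlink -f "$dev/device" 2>/dev/null) || continue
    if addr=$(pci_addr_of "$link"); then
        echo "Found net device $(basename "$dev") under $addr"
    fi
done
true
```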
00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:48.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:48.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:29:48.672 00:29:48.672 --- 10.0.0.2 ping statistics --- 00:29:48.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.672 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:48.672 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.672 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:29:48.672 00:29:48.672 --- 10.0.0.1 ping statistics --- 00:29:48.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.672 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
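The `nvmf_tcp_init` sequence above moves one physical port into a fresh namespace, addresses both sides as 10.0.0.1/10.0.0.2, opens TCP 4420 in iptables, and pings each direction. The same wiring can be sketched with a veth pair standing in for the physical `cvl_0_*` ports; all names here are illustrative, and the commands need root, so the sketch echoes them by default (set `RUN=1` to apply):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace wiring in the log above, using a veth
# pair instead of real NICs. With RUN=1 (and root) the commands execute.
ns=spdk_tgt_ns
run() { if [ "${RUN:-0}" = 1 ]; then "$@"; else echo "$*"; fi; }

run ip netns add "$ns"
run ip link add veth_host type veth peer name veth_tgt
run ip link set veth_tgt netns "$ns"                      # target side enters ns
run ip addr add 10.0.0.1/24 dev veth_host                 # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt
run ip link set veth_host up
run ip netns exec "$ns" ip link set veth_tgt up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i veth_host -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                    # both directions, as in the log
run ip netns exec "$ns" ping -c 1 10.0.0.1
```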
00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2559654 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2559654 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2559654 ']' 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.672 [2024-12-10 23:00:55.723152] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:48.672 [2024-12-10 23:00:55.724171] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:29:48.672 [2024-12-10 23:00:55.724229] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.672 [2024-12-10 23:00:55.807218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.672 [2024-12-10 23:00:55.848636] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.672 [2024-12-10 23:00:55.848676] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.672 [2024-12-10 23:00:55.848684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.672 [2024-12-10 23:00:55.848690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.672 [2024-12-10 23:00:55.848697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.672 [2024-12-10 23:00:55.850176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.672 [2024-12-10 23:00:55.850267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.672 [2024-12-10 23:00:55.850374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.672 [2024-12-10 23:00:55.850374] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:48.672 [2024-12-10 23:00:55.919648] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:48.672 [2024-12-10 23:00:55.920499] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:48.672 [2024-12-10 23:00:55.920787] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:48.672 [2024-12-10 23:00:55.921179] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:48.672 [2024-12-10 23:00:55.921220] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
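`waitforlisten 2559654` above blocks until the freshly started `nvmf_tgt` answers on `/var/tmp/spdk.sock`. A simplified poll loop in the same spirit is sketched below; it only waits for the path to appear, whereas SPDK's helper also verifies the pid is still alive and that the socket accepts RPCs:

```shell
#!/usr/bin/env bash
# Poll until a path exists, giving up after N tenth-second tries.
# Simplified stand-in for waitforlisten; name and interface are illustrative.
wait_for_path() {
    local path=$1 tries=${2:-50}
    while (( tries-- > 0 )); do
        if [ -e "$path" ]; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```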
00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:48.672 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.673 23:00:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 [2024-12-10 23:00:55.999203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 23:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 Malloc0 00:29:48.673 [2024-12-10 23:00:56.099415] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2559865 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2559865 /var/tmp/bdevperf.sock 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2559865 ']' 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:48.673 { 00:29:48.673 "params": { 00:29:48.673 "name": "Nvme$subsystem", 00:29:48.673 "trtype": "$TEST_TRANSPORT", 00:29:48.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.673 "adrfam": "ipv4", 00:29:48.673 "trsvcid": "$NVMF_PORT", 00:29:48.673 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.673 "hdgst": ${hdgst:-false}, 00:29:48.673 "ddgst": ${ddgst:-false} 00:29:48.673 }, 00:29:48.673 "method": "bdev_nvme_attach_controller" 00:29:48.673 } 00:29:48.673 EOF 00:29:48.673 )") 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:48.673 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:48.673 "params": { 00:29:48.673 "name": "Nvme0", 00:29:48.673 "trtype": "tcp", 00:29:48.673 "traddr": "10.0.0.2", 00:29:48.673 "adrfam": "ipv4", 00:29:48.673 "trsvcid": "4420", 00:29:48.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.673 "hdgst": false, 00:29:48.673 "ddgst": false 00:29:48.673 }, 00:29:48.673 "method": "bdev_nvme_attach_controller" 00:29:48.673 }' 00:29:48.673 [2024-12-10 23:00:56.196540] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:29:48.673 [2024-12-10 23:00:56.196590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2559865 ] 00:29:48.673 [2024-12-10 23:00:56.274342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.673 [2024-12-10 23:00:56.314858] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.934 Running I/O for 10 seconds... 
00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:48.934 23:00:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=101 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 101 -ge 100 ']' 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.934 
[2024-12-10 23:00:56.610904] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610941] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610949] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610956] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610962] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610980] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.610998] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.611004] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.611011] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98c1b0 is same with the state(6) to be set 00:29:48.934 [2024-12-10 23:00:56.612626] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.934 [2024-12-10 23:00:56.612659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.612669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.934 [2024-12-10 23:00:56.612677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.612685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.934 [2024-12-10 23:00:56.612692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.612699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.934 [2024-12-10 23:00:56.612706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.612712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dc1a0 is same with the state(6) to be set 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:48.934 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:48.934 
23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:48.934 [2024-12-10 23:00:56.620435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 
nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:29:48.934 [2024-12-10 23:00:56.620633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.934 [2024-12-10 23:00:56.620640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.934 [2024-12-10 23:00:56.620650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 
23:00:56.620720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.620992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.620998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 
[2024-12-10 23:00:56.621069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.935 [2024-12-10 23:00:56.621272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.935 [2024-12-10 23:00:56.621279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 
23:00:56.621414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.621436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.936 [2024-12-10 23:00:56.621443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.936 [2024-12-10 23:00:56.622386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:48.936 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:48.936 23:00:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:48.936 task offset: 24576 on job bdev=Nvme0n1 fails 00:29:48.936 00:29:48.936 Latency(us) 00:29:48.936 [2024-12-10T22:00:56.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:48.936 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:48.936 Job: Nvme0n1 ended in about 0.11 seconds with error 00:29:48.936 Verification LBA range: start 0x0 length 0x400 00:29:48.936 Nvme0n1 : 0.11 1737.46 108.59 579.15 0.00 25439.33 1410.45 27240.18 00:29:48.936 [2024-12-10T22:00:56.665Z] =================================================================================================================== 00:29:48.936 [2024-12-10T22:00:56.665Z] Total : 1737.46 108.59 579.15 0.00 
25439.33 1410.45 27240.18 00:29:48.936 [2024-12-10 23:00:56.624776] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:48.936 [2024-12-10 23:00:56.624796] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17dc1a0 (9): Bad file descriptor 00:29:49.195 [2024-12-10 23:00:56.719344] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2559865 00:29:50.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2559865) - No such process 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:29:50.132 { 00:29:50.132 "params": { 00:29:50.132 "name": "Nvme$subsystem", 00:29:50.132 "trtype": "$TEST_TRANSPORT", 00:29:50.132 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:50.132 "adrfam": "ipv4", 00:29:50.132 "trsvcid": "$NVMF_PORT", 00:29:50.132 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:50.132 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:50.132 "hdgst": ${hdgst:-false}, 00:29:50.132 "ddgst": ${ddgst:-false} 00:29:50.132 }, 00:29:50.132 "method": "bdev_nvme_attach_controller" 00:29:50.132 } 00:29:50.132 EOF 00:29:50.132 )") 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:29:50.132 23:00:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:50.132 "params": { 00:29:50.132 "name": "Nvme0", 00:29:50.132 "trtype": "tcp", 00:29:50.132 "traddr": "10.0.0.2", 00:29:50.132 "adrfam": "ipv4", 00:29:50.132 "trsvcid": "4420", 00:29:50.132 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:50.132 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:50.132 "hdgst": false, 00:29:50.132 "ddgst": false 00:29:50.132 }, 00:29:50.132 "method": "bdev_nvme_attach_controller" 00:29:50.132 }' 00:29:50.132 [2024-12-10 23:00:57.679678] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:29:50.132 [2024-12-10 23:00:57.679728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2560109 ] 00:29:50.132 [2024-12-10 23:00:57.754202] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.132 [2024-12-10 23:00:57.793380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.702 Running I/O for 1 seconds... 00:29:51.640 2048.00 IOPS, 128.00 MiB/s 00:29:51.640 Latency(us) 00:29:51.640 [2024-12-10T22:00:59.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.640 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:51.640 Verification LBA range: start 0x0 length 0x400 00:29:51.640 Nvme0n1 : 1.03 2050.70 128.17 0.00 0.00 30713.19 6297.15 27240.18 00:29:51.640 [2024-12-10T22:00:59.369Z] =================================================================================================================== 00:29:51.640 [2024-12-10T22:00:59.369Z] Total : 2050.70 128.17 0.00 0.00 30713.19 6297.15 27240.18 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevperf.conf 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/rpcs.txt 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:51.640 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:51.899 rmmod nvme_tcp 00:29:51.899 rmmod nvme_fabrics 00:29:51.899 rmmod nvme_keyring 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2559654 ']' 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2559654 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2559654 ']' 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2559654 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:51.899 23:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2559654 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2559654' 00:29:51.899 killing process with pid 2559654 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2559654 00:29:51.899 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2559654 00:29:52.158 [2024-12-10 23:00:59.632795] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:52.158 23:00:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:52.158 23:00:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:54.063 00:29:54.063 real 0m12.150s 00:29:54.063 user 0m17.193s 00:29:54.063 sys 0m6.144s 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:54.063 ************************************ 00:29:54.063 END TEST nvmf_host_management 00:29:54.063 ************************************ 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:54.063 
23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:54.063 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:54.323 ************************************ 00:29:54.323 START TEST nvmf_lvol 00:29:54.323 ************************************ 00:29:54.323 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:54.323 * Looking for test storage... 00:29:54.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:29:54.323 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:54.323 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:29:54.323 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:54.324 23:01:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.324 --rc genhtml_branch_coverage=1 00:29:54.324 --rc 
genhtml_function_coverage=1 00:29:54.324 --rc genhtml_legend=1 00:29:54.324 --rc geninfo_all_blocks=1 00:29:54.324 --rc geninfo_unexecuted_blocks=1 00:29:54.324 00:29:54.324 ' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.324 --rc genhtml_branch_coverage=1 00:29:54.324 --rc genhtml_function_coverage=1 00:29:54.324 --rc genhtml_legend=1 00:29:54.324 --rc geninfo_all_blocks=1 00:29:54.324 --rc geninfo_unexecuted_blocks=1 00:29:54.324 00:29:54.324 ' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.324 --rc genhtml_branch_coverage=1 00:29:54.324 --rc genhtml_function_coverage=1 00:29:54.324 --rc genhtml_legend=1 00:29:54.324 --rc geninfo_all_blocks=1 00:29:54.324 --rc geninfo_unexecuted_blocks=1 00:29:54.324 00:29:54.324 ' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:54.324 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:54.324 --rc genhtml_branch_coverage=1 00:29:54.324 --rc genhtml_function_coverage=1 00:29:54.324 --rc genhtml_legend=1 00:29:54.324 --rc geninfo_all_blocks=1 00:29:54.324 --rc geninfo_unexecuted_blocks=1 00:29:54.324 00:29:54.324 ' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:29:54.324 23:01:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.324 23:01:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:54.324 23:01:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:54.324 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:29:54.325 23:01:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:00.897 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:00.897 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.897 23:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:00.897 Found net devices under 0000:86:00.0: cvl_0_0 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:00.897 Found net devices under 0000:86:00.1: cvl_0_1 00:30:00.897 23:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:00.897 23:01:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:00.897 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:00.897 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:00.897 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:30:00.898 00:30:00.898 --- 10.0.0.2 ping statistics --- 00:30:00.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.898 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:00.898 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:00.898 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:30:00.898 00:30:00.898 --- 10.0.0.1 ping statistics --- 00:30:00.898 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:00.898 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:30:00.898 
23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2563866 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2563866 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2563866 ']' 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.898 23:01:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:00.898 [2024-12-10 23:01:07.983433] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:30:00.898 [2024-12-10 23:01:07.984347] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:30:00.898 [2024-12-10 23:01:07.984380] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.898 [2024-12-10 23:01:08.063672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:00.898 [2024-12-10 23:01:08.104331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.898 [2024-12-10 23:01:08.104366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.898 [2024-12-10 23:01:08.104374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.898 [2024-12-10 23:01:08.104380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.898 [2024-12-10 23:01:08.104385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.898 [2024-12-10 23:01:08.105619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:00.898 [2024-12-10 23:01:08.105724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.898 [2024-12-10 23:01:08.105726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:00.898 [2024-12-10 23:01:08.173283] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:00.898 [2024-12-10 23:01:08.174181] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:00.898 [2024-12-10 23:01:08.174328] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:00.898 [2024-12-10 23:01:08.174477] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:00.898 [2024-12-10 23:01:08.418404] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:00.898 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:01.157 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:30:01.158 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:01.417 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:30:01.417 23:01:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:30:01.417 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:30:01.676 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=99798c9e-50d4-4d37-913c-3da0f9d90b61 00:30:01.676 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 99798c9e-50d4-4d37-913c-3da0f9d90b61 lvol 20 00:30:01.935 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1fb98df3-17f4-4c53-b9be-133b808e6efe 00:30:01.935 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:02.194 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1fb98df3-17f4-4c53-b9be-133b808e6efe 00:30:02.194 23:01:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:02.453 [2024-12-10 23:01:10.078311] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.453 23:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:02.712 23:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2564244 00:30:02.712 23:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:30:02.712 23:01:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:30:03.654 23:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_snapshot 1fb98df3-17f4-4c53-b9be-133b808e6efe MY_SNAPSHOT 00:30:03.913 23:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=5a5aff8f-47d1-41e5-a62b-45d77590723f 00:30:03.913 23:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_resize 1fb98df3-17f4-4c53-b9be-133b808e6efe 30 00:30:04.171 23:01:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_clone 5a5aff8f-47d1-41e5-a62b-45d77590723f MY_CLONE 00:30:04.430 23:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=182e76bd-0971-4a89-9243-0a2cde5a1988 00:30:04.430 23:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_inflate 182e76bd-0971-4a89-9243-0a2cde5a1988 00:30:04.997 23:01:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2564244 00:30:13.113 Initializing NVMe Controllers 00:30:13.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode0 00:30:13.113 Controller IO queue size 128, less than required. 00:30:13.113 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:13.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:30:13.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:30:13.113 Initialization complete. Launching workers. 00:30:13.113 ======================================================== 00:30:13.113 Latency(us) 00:30:13.113 Device Information : IOPS MiB/s Average min max 00:30:13.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11942.90 46.65 10719.08 5161.37 57016.94 00:30:13.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11783.00 46.03 10865.19 3807.95 59012.23 00:30:13.113 ======================================================== 00:30:13.113 Total : 23725.90 92.68 10791.64 3807.95 59012.23 00:30:13.113 00:30:13.113 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:13.371 23:01:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 1fb98df3-17f4-4c53-b9be-133b808e6efe 00:30:13.371 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 99798c9e-50d4-4d37-913c-3da0f9d90b61 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:30:13.630 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:13.630 rmmod nvme_tcp 00:30:13.630 rmmod nvme_fabrics 00:30:13.630 rmmod nvme_keyring 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2563866 ']' 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2563866 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2563866 ']' 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2563866 00:30:13.630 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:13.932 23:01:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2563866 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2563866' 00:30:13.932 killing process with pid 2563866 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2563866 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2563866 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:13.932 23:01:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.513 00:30:16.513 real 0m21.879s 00:30:16.513 user 0m55.378s 00:30:16.513 sys 0m10.130s 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:16.513 ************************************ 00:30:16.513 END TEST nvmf_lvol 00:30:16.513 ************************************ 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:16.513 ************************************ 00:30:16.513 START TEST nvmf_lvs_grow 00:30:16.513 ************************************ 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:16.513 * Looking for test storage... 
00:30:16.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.513 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.513 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.514 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:16.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.514 --rc genhtml_branch_coverage=1 00:30:16.514 --rc genhtml_function_coverage=1 00:30:16.514 --rc genhtml_legend=1 00:30:16.514 --rc geninfo_all_blocks=1 00:30:16.514 --rc geninfo_unexecuted_blocks=1 00:30:16.514 00:30:16.514 ' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:16.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.514 --rc genhtml_branch_coverage=1 00:30:16.514 --rc genhtml_function_coverage=1 00:30:16.514 --rc genhtml_legend=1 00:30:16.514 --rc geninfo_all_blocks=1 00:30:16.514 --rc geninfo_unexecuted_blocks=1 00:30:16.514 00:30:16.514 ' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:16.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.514 --rc genhtml_branch_coverage=1 00:30:16.514 --rc genhtml_function_coverage=1 00:30:16.514 --rc genhtml_legend=1 00:30:16.514 --rc geninfo_all_blocks=1 00:30:16.514 --rc geninfo_unexecuted_blocks=1 00:30:16.514 00:30:16.514 ' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:16.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.514 --rc genhtml_branch_coverage=1 00:30:16.514 --rc genhtml_function_coverage=1 00:30:16.514 --rc genhtml_legend=1 00:30:16.514 --rc geninfo_all_blocks=1 00:30:16.514 --rc 
geninfo_unexecuted_blocks=1 00:30:16.514 00:30:16.514 ' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:16.514 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.514 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.514 23:01:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.514 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.515 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.515 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.515 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.515 23:01:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.083 
23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.083 23:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.083 23:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:23.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:23.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:23.083 Found net devices under 0000:86:00.0: cvl_0_0 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.083 23:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:23.083 Found net devices under 0000:86:00.1: cvl_0_1 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.083 
23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.398 ms 00:30:23.083 00:30:23.083 --- 10.0.0.2 ping statistics --- 00:30:23.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.083 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:23.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:30:23.083 00:30:23.083 --- 10.0.0.1 ping statistics --- 00:30:23.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.083 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.083 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.083 23:01:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2569495 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2569495 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2569495 ']' 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.084 23:01:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.084 [2024-12-10 23:01:29.929099] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:23.084 [2024-12-10 23:01:29.930020] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:30:23.084 [2024-12-10 23:01:29.930054] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.084 [2024-12-10 23:01:30.010467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.084 [2024-12-10 23:01:30.052354] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.084 [2024-12-10 23:01:30.052388] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.084 [2024-12-10 23:01:30.052395] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.084 [2024-12-10 23:01:30.052403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.084 [2024-12-10 23:01:30.052408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.084 [2024-12-10 23:01:30.052933] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.084 [2024-12-10 23:01:30.121553] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:23.084 [2024-12-10 23:01:30.121747] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:23.084 [2024-12-10 23:01:30.357587] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.084 ************************************ 00:30:23.084 START TEST lvs_grow_clean 00:30:23.084 ************************************ 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:23.084 23:01:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:23.084 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:23.343 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=263f0a41-7044-4be3-a966-20baa0fb3add 00:30:23.343 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:23.343 23:01:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:23.343 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:23.343 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:23.601 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 263f0a41-7044-4be3-a966-20baa0fb3add lvol 150 00:30:23.601 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7d74d01-804c-4b33-ba17-106e250dd5ed 00:30:23.601 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:23.601 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:23.860 [2024-12-10 23:01:31.441331] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:23.860 [2024-12-10 23:01:31.441459] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:23.860 true 00:30:23.860 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:23.860 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:24.119 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:24.119 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:24.378 23:01:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7d74d01-804c-4b33-ba17-106e250dd5ed 00:30:24.378 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:24.636 [2024-12-10 23:01:32.213802] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:24.636 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:24.894 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2569989 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2569989 /var/tmp/bdevperf.sock 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2569989 ']' 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:24.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.895 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:24.895 [2024-12-10 23:01:32.468375] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:30:24.895 [2024-12-10 23:01:32.468424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569989 ] 00:30:24.895 [2024-12-10 23:01:32.541636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.895 [2024-12-10 23:01:32.583332] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.153 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.153 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:25.153 23:01:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:25.411 Nvme0n1 00:30:25.411 23:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:25.670 [ 00:30:25.670 { 00:30:25.670 "name": "Nvme0n1", 00:30:25.670 "aliases": [ 00:30:25.670 "e7d74d01-804c-4b33-ba17-106e250dd5ed" 00:30:25.670 ], 00:30:25.670 "product_name": "NVMe disk", 00:30:25.670 
"block_size": 4096, 00:30:25.670 "num_blocks": 38912, 00:30:25.670 "uuid": "e7d74d01-804c-4b33-ba17-106e250dd5ed", 00:30:25.670 "numa_id": 1, 00:30:25.670 "assigned_rate_limits": { 00:30:25.670 "rw_ios_per_sec": 0, 00:30:25.670 "rw_mbytes_per_sec": 0, 00:30:25.670 "r_mbytes_per_sec": 0, 00:30:25.670 "w_mbytes_per_sec": 0 00:30:25.670 }, 00:30:25.670 "claimed": false, 00:30:25.670 "zoned": false, 00:30:25.670 "supported_io_types": { 00:30:25.670 "read": true, 00:30:25.670 "write": true, 00:30:25.670 "unmap": true, 00:30:25.670 "flush": true, 00:30:25.670 "reset": true, 00:30:25.670 "nvme_admin": true, 00:30:25.670 "nvme_io": true, 00:30:25.670 "nvme_io_md": false, 00:30:25.670 "write_zeroes": true, 00:30:25.670 "zcopy": false, 00:30:25.670 "get_zone_info": false, 00:30:25.670 "zone_management": false, 00:30:25.670 "zone_append": false, 00:30:25.670 "compare": true, 00:30:25.670 "compare_and_write": true, 00:30:25.670 "abort": true, 00:30:25.670 "seek_hole": false, 00:30:25.670 "seek_data": false, 00:30:25.670 "copy": true, 00:30:25.670 "nvme_iov_md": false 00:30:25.670 }, 00:30:25.670 "memory_domains": [ 00:30:25.670 { 00:30:25.670 "dma_device_id": "system", 00:30:25.670 "dma_device_type": 1 00:30:25.670 } 00:30:25.670 ], 00:30:25.670 "driver_specific": { 00:30:25.670 "nvme": [ 00:30:25.670 { 00:30:25.670 "trid": { 00:30:25.670 "trtype": "TCP", 00:30:25.670 "adrfam": "IPv4", 00:30:25.670 "traddr": "10.0.0.2", 00:30:25.670 "trsvcid": "4420", 00:30:25.670 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:25.670 }, 00:30:25.670 "ctrlr_data": { 00:30:25.670 "cntlid": 1, 00:30:25.670 "vendor_id": "0x8086", 00:30:25.670 "model_number": "SPDK bdev Controller", 00:30:25.670 "serial_number": "SPDK0", 00:30:25.670 "firmware_revision": "25.01", 00:30:25.670 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:25.670 "oacs": { 00:30:25.670 "security": 0, 00:30:25.670 "format": 0, 00:30:25.670 "firmware": 0, 00:30:25.670 "ns_manage": 0 00:30:25.670 }, 00:30:25.670 "multi_ctrlr": true, 
00:30:25.670 "ana_reporting": false 00:30:25.670 }, 00:30:25.670 "vs": { 00:30:25.670 "nvme_version": "1.3" 00:30:25.670 }, 00:30:25.670 "ns_data": { 00:30:25.670 "id": 1, 00:30:25.670 "can_share": true 00:30:25.670 } 00:30:25.670 } 00:30:25.670 ], 00:30:25.670 "mp_policy": "active_passive" 00:30:25.670 } 00:30:25.670 } 00:30:25.670 ] 00:30:25.670 23:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2570020 00:30:25.670 23:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:25.670 23:01:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:25.670 Running I/O for 10 seconds... 00:30:27.047 Latency(us) 00:30:27.047 [2024-12-10T22:01:34.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.047 Nvme0n1 : 1.00 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:27.047 [2024-12-10T22:01:34.776Z] =================================================================================================================== 00:30:27.047 [2024-12-10T22:01:34.776Z] Total : 22225.00 86.82 0.00 0.00 0.00 0.00 0.00 00:30:27.047 00:30:27.613 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:27.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.872 Nvme0n1 : 2.00 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:30:27.872 [2024-12-10T22:01:35.601Z] 
=================================================================================================================== 00:30:27.872 [2024-12-10T22:01:35.601Z] Total : 22542.50 88.06 0.00 0.00 0.00 0.00 0.00 00:30:27.872 00:30:27.872 true 00:30:27.872 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:27.872 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:28.130 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:28.130 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:28.130 23:01:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2570020 00:30:28.697 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.697 Nvme0n1 : 3.00 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:30:28.697 [2024-12-10T22:01:36.426Z] =================================================================================================================== 00:30:28.697 [2024-12-10T22:01:36.426Z] Total : 22690.67 88.64 0.00 0.00 0.00 0.00 0.00 00:30:28.697 00:30:30.073 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.073 Nvme0n1 : 4.00 22769.00 88.94 0.00 0.00 0.00 0.00 0.00 00:30:30.073 [2024-12-10T22:01:37.802Z] =================================================================================================================== 00:30:30.073 [2024-12-10T22:01:37.802Z] Total : 22769.00 88.94 0.00 0.00 0.00 0.00 0.00 00:30:30.073 00:30:31.009 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:31.009 Nvme0n1 : 5.00 22838.00 89.21 0.00 0.00 0.00 0.00 0.00 00:30:31.009 [2024-12-10T22:01:38.738Z] =================================================================================================================== 00:30:31.009 [2024-12-10T22:01:38.738Z] Total : 22838.00 89.21 0.00 0.00 0.00 0.00 0.00 00:30:31.009 00:30:31.944 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.944 Nvme0n1 : 6.00 22884.00 89.39 0.00 0.00 0.00 0.00 0.00 00:30:31.944 [2024-12-10T22:01:39.673Z] =================================================================================================================== 00:30:31.944 [2024-12-10T22:01:39.673Z] Total : 22884.00 89.39 0.00 0.00 0.00 0.00 0.00 00:30:31.944 00:30:32.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:32.880 Nvme0n1 : 7.00 22898.71 89.45 0.00 0.00 0.00 0.00 0.00 00:30:32.880 [2024-12-10T22:01:40.609Z] =================================================================================================================== 00:30:32.880 [2024-12-10T22:01:40.609Z] Total : 22898.71 89.45 0.00 0.00 0.00 0.00 0.00 00:30:32.880 00:30:33.816 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.816 Nvme0n1 : 8.00 22893.88 89.43 0.00 0.00 0.00 0.00 0.00 00:30:33.816 [2024-12-10T22:01:41.545Z] =================================================================================================================== 00:30:33.816 [2024-12-10T22:01:41.545Z] Total : 22893.88 89.43 0.00 0.00 0.00 0.00 0.00 00:30:33.816 00:30:34.752 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.752 Nvme0n1 : 9.00 22918.33 89.52 0.00 0.00 0.00 0.00 0.00 00:30:34.752 [2024-12-10T22:01:42.481Z] =================================================================================================================== 00:30:34.752 [2024-12-10T22:01:42.481Z] Total : 22918.33 89.52 0.00 0.00 0.00 0.00 0.00 00:30:34.752 
00:30:35.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.689 Nvme0n1 : 10.00 22950.60 89.65 0.00 0.00 0.00 0.00 0.00 00:30:35.689 [2024-12-10T22:01:43.418Z] =================================================================================================================== 00:30:35.689 [2024-12-10T22:01:43.418Z] Total : 22950.60 89.65 0.00 0.00 0.00 0.00 0.00 00:30:35.689 00:30:35.689 00:30:35.689 Latency(us) 00:30:35.689 [2024-12-10T22:01:43.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.689 Nvme0n1 : 10.00 22954.55 89.67 0.00 0.00 5573.03 3333.79 26898.25 00:30:35.689 [2024-12-10T22:01:43.418Z] =================================================================================================================== 00:30:35.689 [2024-12-10T22:01:43.418Z] Total : 22954.55 89.67 0.00 0.00 5573.03 3333.79 26898.25 00:30:35.689 { 00:30:35.689 "results": [ 00:30:35.689 { 00:30:35.689 "job": "Nvme0n1", 00:30:35.689 "core_mask": "0x2", 00:30:35.689 "workload": "randwrite", 00:30:35.689 "status": "finished", 00:30:35.689 "queue_depth": 128, 00:30:35.689 "io_size": 4096, 00:30:35.689 "runtime": 10.003854, 00:30:35.689 "iops": 22954.55331515234, 00:30:35.689 "mibps": 89.66622388731383, 00:30:35.689 "io_failed": 0, 00:30:35.689 "io_timeout": 0, 00:30:35.689 "avg_latency_us": 5573.02784530847, 00:30:35.689 "min_latency_us": 3333.7878260869566, 00:30:35.689 "max_latency_us": 26898.253913043478 00:30:35.689 } 00:30:35.689 ], 00:30:35.689 "core_count": 1 00:30:35.689 } 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2569989 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2569989 ']' 00:30:35.948 23:01:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2569989 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2569989 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2569989' 00:30:35.948 killing process with pid 2569989 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2569989 00:30:35.948 Received shutdown signal, test time was about 10.000000 seconds 00:30:35.948 00:30:35.948 Latency(us) 00:30:35.948 [2024-12-10T22:01:43.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:35.948 [2024-12-10T22:01:43.677Z] =================================================================================================================== 00:30:35.948 [2024-12-10T22:01:43.677Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2569989 00:30:35.948 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.207 23:01:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:36.466 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:36.466 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:36.725 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:36.725 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:36.725 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:36.725 [2024-12-10 23:01:44.425396] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:36.985 request: 00:30:36.985 { 00:30:36.985 "uuid": "263f0a41-7044-4be3-a966-20baa0fb3add", 00:30:36.985 
"method": "bdev_lvol_get_lvstores", 00:30:36.985 "req_id": 1 00:30:36.985 } 00:30:36.985 Got JSON-RPC error response 00:30:36.985 response: 00:30:36.985 { 00:30:36.985 "code": -19, 00:30:36.985 "message": "No such device" 00:30:36.985 } 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:36.985 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:37.244 aio_bdev 00:30:37.244 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7d74d01-804c-4b33-ba17-106e250dd5ed 00:30:37.244 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=e7d74d01-804c-4b33-ba17-106e250dd5ed 00:30:37.244 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:37.244 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:37.244 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:37.244 23:01:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:37.244 23:01:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:37.502 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b e7d74d01-804c-4b33-ba17-106e250dd5ed -t 2000 00:30:37.759 [ 00:30:37.759 { 00:30:37.759 "name": "e7d74d01-804c-4b33-ba17-106e250dd5ed", 00:30:37.759 "aliases": [ 00:30:37.759 "lvs/lvol" 00:30:37.759 ], 00:30:37.759 "product_name": "Logical Volume", 00:30:37.759 "block_size": 4096, 00:30:37.759 "num_blocks": 38912, 00:30:37.759 "uuid": "e7d74d01-804c-4b33-ba17-106e250dd5ed", 00:30:37.759 "assigned_rate_limits": { 00:30:37.759 "rw_ios_per_sec": 0, 00:30:37.759 "rw_mbytes_per_sec": 0, 00:30:37.759 "r_mbytes_per_sec": 0, 00:30:37.759 "w_mbytes_per_sec": 0 00:30:37.759 }, 00:30:37.759 "claimed": false, 00:30:37.759 "zoned": false, 00:30:37.759 "supported_io_types": { 00:30:37.759 "read": true, 00:30:37.759 "write": true, 00:30:37.759 "unmap": true, 00:30:37.759 "flush": false, 00:30:37.759 "reset": true, 00:30:37.759 "nvme_admin": false, 00:30:37.759 "nvme_io": false, 00:30:37.759 "nvme_io_md": false, 00:30:37.759 "write_zeroes": true, 00:30:37.759 "zcopy": false, 00:30:37.759 "get_zone_info": false, 00:30:37.759 "zone_management": false, 00:30:37.759 "zone_append": false, 00:30:37.759 "compare": false, 00:30:37.759 "compare_and_write": false, 00:30:37.759 "abort": false, 00:30:37.759 "seek_hole": true, 00:30:37.759 "seek_data": true, 00:30:37.759 "copy": false, 00:30:37.759 "nvme_iov_md": false 00:30:37.759 }, 00:30:37.759 "driver_specific": { 00:30:37.759 "lvol": { 00:30:37.759 "lvol_store_uuid": 
"263f0a41-7044-4be3-a966-20baa0fb3add", 00:30:37.759 "base_bdev": "aio_bdev", 00:30:37.759 "thin_provision": false, 00:30:37.759 "num_allocated_clusters": 38, 00:30:37.759 "snapshot": false, 00:30:37.759 "clone": false, 00:30:37.759 "esnap_clone": false 00:30:37.759 } 00:30:37.759 } 00:30:37.759 } 00:30:37.759 ] 00:30:37.759 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:37.759 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:37.759 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:37.759 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:37.759 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:37.759 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:38.015 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:38.016 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete e7d74d01-804c-4b33-ba17-106e250dd5ed 00:30:38.274 23:01:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 263f0a41-7044-4be3-a966-20baa0fb3add 00:30:38.533 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:38.792 00:30:38.792 real 0m15.896s 00:30:38.792 user 0m15.426s 00:30:38.792 sys 0m1.491s 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:38.792 ************************************ 00:30:38.792 END TEST lvs_grow_clean 00:30:38.792 ************************************ 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:38.792 ************************************ 00:30:38.792 START TEST lvs_grow_dirty 00:30:38.792 ************************************ 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:38.792 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.051 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:39.051 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 
aio_bdev lvs 00:30:39.310 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=28715e70-721e-4a4c-812d-e781d38e4d57 00:30:39.310 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:39.310 23:01:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:39.310 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:39.310 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:39.310 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_create -u 28715e70-721e-4a4c-812d-e781d38e4d57 lvol 150 00:30:39.569 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:39.569 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:39.569 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:39.828 [2024-12-10 23:01:47.405335] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block 
count 102400 00:30:39.828 [2024-12-10 23:01:47.405464] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:39.828 true 00:30:39.828 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:39.828 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:40.087 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:40.087 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:40.087 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:40.346 23:01:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.604 [2024-12-10 23:01:48.161756] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.604 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.864 23:01:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2572572 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2572572 /var/tmp/bdevperf.sock 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2572572 ']' 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:40.864 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.864 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:40.864 [2024-12-10 23:01:48.433237] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:30:40.864 [2024-12-10 23:01:48.433288] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572572 ] 00:30:40.864 [2024-12-10 23:01:48.510057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:40.864 [2024-12-10 23:01:48.551045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.123 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:41.123 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:41.123 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:41.382 Nvme0n1 00:30:41.382 23:01:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:41.641 [ 00:30:41.641 { 00:30:41.641 "name": "Nvme0n1", 00:30:41.641 "aliases": [ 00:30:41.641 "9e10ad87-9f01-4cda-a80b-8353aaeafce6" 00:30:41.641 ], 00:30:41.641 "product_name": "NVMe disk", 00:30:41.641 "block_size": 4096, 00:30:41.641 "num_blocks": 38912, 00:30:41.641 "uuid": "9e10ad87-9f01-4cda-a80b-8353aaeafce6", 00:30:41.641 "numa_id": 1, 00:30:41.641 "assigned_rate_limits": { 00:30:41.641 "rw_ios_per_sec": 0, 00:30:41.641 "rw_mbytes_per_sec": 0, 00:30:41.641 "r_mbytes_per_sec": 0, 00:30:41.641 "w_mbytes_per_sec": 0 00:30:41.641 }, 00:30:41.641 "claimed": false, 00:30:41.641 "zoned": false, 
00:30:41.641 "supported_io_types": { 00:30:41.641 "read": true, 00:30:41.641 "write": true, 00:30:41.641 "unmap": true, 00:30:41.641 "flush": true, 00:30:41.641 "reset": true, 00:30:41.641 "nvme_admin": true, 00:30:41.641 "nvme_io": true, 00:30:41.641 "nvme_io_md": false, 00:30:41.641 "write_zeroes": true, 00:30:41.641 "zcopy": false, 00:30:41.641 "get_zone_info": false, 00:30:41.641 "zone_management": false, 00:30:41.641 "zone_append": false, 00:30:41.641 "compare": true, 00:30:41.641 "compare_and_write": true, 00:30:41.641 "abort": true, 00:30:41.641 "seek_hole": false, 00:30:41.641 "seek_data": false, 00:30:41.641 "copy": true, 00:30:41.641 "nvme_iov_md": false 00:30:41.641 }, 00:30:41.641 "memory_domains": [ 00:30:41.641 { 00:30:41.641 "dma_device_id": "system", 00:30:41.641 "dma_device_type": 1 00:30:41.641 } 00:30:41.641 ], 00:30:41.641 "driver_specific": { 00:30:41.641 "nvme": [ 00:30:41.641 { 00:30:41.641 "trid": { 00:30:41.641 "trtype": "TCP", 00:30:41.642 "adrfam": "IPv4", 00:30:41.642 "traddr": "10.0.0.2", 00:30:41.642 "trsvcid": "4420", 00:30:41.642 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:41.642 }, 00:30:41.642 "ctrlr_data": { 00:30:41.642 "cntlid": 1, 00:30:41.642 "vendor_id": "0x8086", 00:30:41.642 "model_number": "SPDK bdev Controller", 00:30:41.642 "serial_number": "SPDK0", 00:30:41.642 "firmware_revision": "25.01", 00:30:41.642 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:41.642 "oacs": { 00:30:41.642 "security": 0, 00:30:41.642 "format": 0, 00:30:41.642 "firmware": 0, 00:30:41.642 "ns_manage": 0 00:30:41.642 }, 00:30:41.642 "multi_ctrlr": true, 00:30:41.642 "ana_reporting": false 00:30:41.642 }, 00:30:41.642 "vs": { 00:30:41.642 "nvme_version": "1.3" 00:30:41.642 }, 00:30:41.642 "ns_data": { 00:30:41.642 "id": 1, 00:30:41.642 "can_share": true 00:30:41.642 } 00:30:41.642 } 00:30:41.642 ], 00:30:41.642 "mp_policy": "active_passive" 00:30:41.642 } 00:30:41.642 } 00:30:41.642 ] 00:30:41.642 23:01:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2572606 00:30:41.642 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:41.642 23:01:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:41.642 Running I/O for 10 seconds... 00:30:42.578 Latency(us) 00:30:42.578 [2024-12-10T22:01:50.307Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:42.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:42.578 Nvme0n1 : 1.00 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:30:42.578 [2024-12-10T22:01:50.307Z] =================================================================================================================== 00:30:42.578 [2024-12-10T22:01:50.307Z] Total : 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:30:42.578 00:30:43.514 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:43.514 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:43.514 Nvme0n1 : 2.00 22440.00 87.66 0.00 0.00 0.00 0.00 0.00 00:30:43.514 [2024-12-10T22:01:51.243Z] =================================================================================================================== 00:30:43.514 [2024-12-10T22:01:51.243Z] Total : 22440.00 87.66 0.00 0.00 0.00 0.00 0.00 00:30:43.514 00:30:43.773 true 00:30:43.773 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:43.773 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:44.032 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:44.032 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:44.032 23:01:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2572606 00:30:44.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:44.600 Nvme0n1 : 3.00 22622.33 88.37 0.00 0.00 0.00 0.00 0.00 00:30:44.600 [2024-12-10T22:01:52.329Z] =================================================================================================================== 00:30:44.600 [2024-12-10T22:01:52.329Z] Total : 22622.33 88.37 0.00 0.00 0.00 0.00 0.00 00:30:44.600 00:30:45.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:45.536 Nvme0n1 : 4.00 22745.25 88.85 0.00 0.00 0.00 0.00 0.00 00:30:45.536 [2024-12-10T22:01:53.265Z] =================================================================================================================== 00:30:45.536 [2024-12-10T22:01:53.265Z] Total : 22745.25 88.85 0.00 0.00 0.00 0.00 0.00 00:30:45.536 00:30:46.913 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:46.913 Nvme0n1 : 5.00 22806.40 89.09 0.00 0.00 0.00 0.00 0.00 00:30:46.913 [2024-12-10T22:01:54.642Z] =================================================================================================================== 00:30:46.913 [2024-12-10T22:01:54.642Z] Total : 22806.40 89.09 0.00 0.00 0.00 0.00 0.00 00:30:46.913 00:30:47.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:47.849 Nvme0n1 : 6.00 22852.67 89.27 0.00 0.00 0.00 0.00 0.00 00:30:47.849 [2024-12-10T22:01:55.578Z] =================================================================================================================== 00:30:47.849 [2024-12-10T22:01:55.578Z] Total : 22852.67 89.27 0.00 0.00 0.00 0.00 0.00 00:30:47.849 00:30:48.786 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:48.786 Nvme0n1 : 7.00 22908.14 89.48 0.00 0.00 0.00 0.00 0.00 00:30:48.786 [2024-12-10T22:01:56.515Z] =================================================================================================================== 00:30:48.786 [2024-12-10T22:01:56.515Z] Total : 22908.14 89.48 0.00 0.00 0.00 0.00 0.00 00:30:48.786 00:30:49.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:49.721 Nvme0n1 : 8.00 22918.00 89.52 0.00 0.00 0.00 0.00 0.00 00:30:49.721 [2024-12-10T22:01:57.450Z] =================================================================================================================== 00:30:49.721 [2024-12-10T22:01:57.450Z] Total : 22918.00 89.52 0.00 0.00 0.00 0.00 0.00 00:30:49.721 00:30:50.657 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:50.657 Nvme0n1 : 9.00 22946.89 89.64 0.00 0.00 0.00 0.00 0.00 00:30:50.657 [2024-12-10T22:01:58.386Z] =================================================================================================================== 00:30:50.657 [2024-12-10T22:01:58.386Z] Total : 22946.89 89.64 0.00 0.00 0.00 0.00 0.00 00:30:50.657 00:30:51.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.594 Nvme0n1 : 10.00 22974.80 89.75 0.00 0.00 0.00 0.00 0.00 00:30:51.594 [2024-12-10T22:01:59.323Z] =================================================================================================================== 00:30:51.594 [2024-12-10T22:01:59.323Z] Total : 22974.80 89.75 0.00 0.00 0.00 0.00 0.00 00:30:51.594 00:30:51.594 
00:30:51.594 Latency(us) 00:30:51.594 [2024-12-10T22:01:59.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:51.594 Nvme0n1 : 10.01 22973.86 89.74 0.00 0.00 5568.60 3205.57 27468.13 00:30:51.594 [2024-12-10T22:01:59.323Z] =================================================================================================================== 00:30:51.594 [2024-12-10T22:01:59.323Z] Total : 22973.86 89.74 0.00 0.00 5568.60 3205.57 27468.13 00:30:51.594 { 00:30:51.594 "results": [ 00:30:51.594 { 00:30:51.594 "job": "Nvme0n1", 00:30:51.594 "core_mask": "0x2", 00:30:51.594 "workload": "randwrite", 00:30:51.594 "status": "finished", 00:30:51.594 "queue_depth": 128, 00:30:51.594 "io_size": 4096, 00:30:51.594 "runtime": 10.005981, 00:30:51.594 "iops": 22973.859334731897, 00:30:51.594 "mibps": 89.74163802629647, 00:30:51.594 "io_failed": 0, 00:30:51.594 "io_timeout": 0, 00:30:51.594 "avg_latency_us": 5568.599707880317, 00:30:51.594 "min_latency_us": 3205.5652173913045, 00:30:51.594 "max_latency_us": 27468.132173913044 00:30:51.594 } 00:30:51.594 ], 00:30:51.594 "core_count": 1 00:30:51.594 } 00:30:51.594 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2572572 00:30:51.594 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2572572 ']' 00:30:51.594 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2572572 00:30:51.594 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:51.594 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.594 23:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2572572 00:30:51.853 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:51.853 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:51.853 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2572572' 00:30:51.853 killing process with pid 2572572 00:30:51.853 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2572572 00:30:51.853 Received shutdown signal, test time was about 10.000000 seconds 00:30:51.853 00:30:51.853 Latency(us) 00:30:51.853 [2024-12-10T22:01:59.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.853 [2024-12-10T22:01:59.582Z] =================================================================================================================== 00:30:51.853 [2024-12-10T22:01:59.582Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:51.853 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2572572 00:30:51.853 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.111 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:52.370 23:01:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:52.370 23:01:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2569495 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2569495 00:30:52.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2569495 Killed "${NVMF_APP[@]}" "$@" 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2574413 00:30:52.628 23:02:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2574413 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2574413 ']' 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:52.628 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:52.628 [2024-12-10 23:02:00.180375] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:52.628 [2024-12-10 23:02:00.181294] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:30:52.628 [2024-12-10 23:02:00.181333] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.628 [2024-12-10 23:02:00.259188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.628 [2024-12-10 23:02:00.299437] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.628 [2024-12-10 23:02:00.299474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.628 [2024-12-10 23:02:00.299482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.628 [2024-12-10 23:02:00.299488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.628 [2024-12-10 23:02:00.299493] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:52.628 [2024-12-10 23:02:00.300021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:52.887 [2024-12-10 23:02:00.369203] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:52.887 [2024-12-10 23:02:00.369423] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.887 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:53.146 [2024-12-10 23:02:00.613579] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:53.146 [2024-12-10 23:02:00.613783] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:53.146 [2024-12-10 23:02:00.613869] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # 
local bdev_name=9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:53.146 23:02:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 9e10ad87-9f01-4cda-a80b-8353aaeafce6 -t 2000 00:30:53.404 [ 00:30:53.404 { 00:30:53.404 "name": "9e10ad87-9f01-4cda-a80b-8353aaeafce6", 00:30:53.404 "aliases": [ 00:30:53.404 "lvs/lvol" 00:30:53.404 ], 00:30:53.404 "product_name": "Logical Volume", 00:30:53.404 "block_size": 4096, 00:30:53.404 "num_blocks": 38912, 00:30:53.404 "uuid": "9e10ad87-9f01-4cda-a80b-8353aaeafce6", 00:30:53.404 "assigned_rate_limits": { 00:30:53.404 "rw_ios_per_sec": 0, 00:30:53.404 "rw_mbytes_per_sec": 0, 00:30:53.404 "r_mbytes_per_sec": 0, 00:30:53.404 "w_mbytes_per_sec": 0 00:30:53.404 }, 00:30:53.404 "claimed": false, 00:30:53.404 "zoned": false, 00:30:53.404 "supported_io_types": { 00:30:53.404 "read": true, 00:30:53.404 "write": true, 00:30:53.404 "unmap": true, 00:30:53.404 "flush": false, 00:30:53.404 "reset": true, 00:30:53.404 "nvme_admin": false, 00:30:53.404 "nvme_io": false, 00:30:53.404 "nvme_io_md": false, 00:30:53.404 
"write_zeroes": true, 00:30:53.404 "zcopy": false, 00:30:53.404 "get_zone_info": false, 00:30:53.404 "zone_management": false, 00:30:53.404 "zone_append": false, 00:30:53.404 "compare": false, 00:30:53.404 "compare_and_write": false, 00:30:53.404 "abort": false, 00:30:53.404 "seek_hole": true, 00:30:53.404 "seek_data": true, 00:30:53.404 "copy": false, 00:30:53.404 "nvme_iov_md": false 00:30:53.404 }, 00:30:53.404 "driver_specific": { 00:30:53.404 "lvol": { 00:30:53.404 "lvol_store_uuid": "28715e70-721e-4a4c-812d-e781d38e4d57", 00:30:53.404 "base_bdev": "aio_bdev", 00:30:53.404 "thin_provision": false, 00:30:53.404 "num_allocated_clusters": 38, 00:30:53.404 "snapshot": false, 00:30:53.404 "clone": false, 00:30:53.404 "esnap_clone": false 00:30:53.404 } 00:30:53.404 } 00:30:53.404 } 00:30:53.404 ] 00:30:53.405 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:53.405 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:53.405 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:53.663 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:53.663 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:53.663 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:53.922 23:02:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:53.922 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:53.922 [2024-12-10 23:02:01.612495] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py ]] 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:54.181 request: 00:30:54.181 { 00:30:54.181 "uuid": "28715e70-721e-4a4c-812d-e781d38e4d57", 00:30:54.181 "method": "bdev_lvol_get_lvstores", 00:30:54.181 "req_id": 1 00:30:54.181 } 00:30:54.181 Got JSON-RPC error response 00:30:54.181 response: 00:30:54.181 { 00:30:54.181 "code": -19, 00:30:54.181 "message": "No such device" 00:30:54.181 } 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:54.181 23:02:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:54.439 aio_bdev 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:54.439 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:54.698 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_get_bdevs -b 9e10ad87-9f01-4cda-a80b-8353aaeafce6 -t 2000 00:30:54.957 [ 00:30:54.957 { 00:30:54.957 "name": "9e10ad87-9f01-4cda-a80b-8353aaeafce6", 00:30:54.957 "aliases": [ 00:30:54.957 "lvs/lvol" 00:30:54.957 ], 00:30:54.957 "product_name": "Logical Volume", 00:30:54.957 "block_size": 4096, 00:30:54.957 "num_blocks": 38912, 00:30:54.957 "uuid": "9e10ad87-9f01-4cda-a80b-8353aaeafce6", 00:30:54.957 "assigned_rate_limits": { 00:30:54.957 
"rw_ios_per_sec": 0, 00:30:54.957 "rw_mbytes_per_sec": 0, 00:30:54.957 "r_mbytes_per_sec": 0, 00:30:54.957 "w_mbytes_per_sec": 0 00:30:54.957 }, 00:30:54.957 "claimed": false, 00:30:54.957 "zoned": false, 00:30:54.957 "supported_io_types": { 00:30:54.957 "read": true, 00:30:54.957 "write": true, 00:30:54.957 "unmap": true, 00:30:54.957 "flush": false, 00:30:54.957 "reset": true, 00:30:54.957 "nvme_admin": false, 00:30:54.957 "nvme_io": false, 00:30:54.957 "nvme_io_md": false, 00:30:54.957 "write_zeroes": true, 00:30:54.957 "zcopy": false, 00:30:54.957 "get_zone_info": false, 00:30:54.957 "zone_management": false, 00:30:54.957 "zone_append": false, 00:30:54.957 "compare": false, 00:30:54.957 "compare_and_write": false, 00:30:54.957 "abort": false, 00:30:54.957 "seek_hole": true, 00:30:54.957 "seek_data": true, 00:30:54.957 "copy": false, 00:30:54.957 "nvme_iov_md": false 00:30:54.957 }, 00:30:54.957 "driver_specific": { 00:30:54.957 "lvol": { 00:30:54.957 "lvol_store_uuid": "28715e70-721e-4a4c-812d-e781d38e4d57", 00:30:54.957 "base_bdev": "aio_bdev", 00:30:54.957 "thin_provision": false, 00:30:54.957 "num_allocated_clusters": 38, 00:30:54.957 "snapshot": false, 00:30:54.957 "clone": false, 00:30:54.957 "esnap_clone": false 00:30:54.957 } 00:30:54.957 } 00:30:54.957 } 00:30:54.957 ] 00:30:54.957 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:54.957 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:54.957 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:54.957 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 
61 )) 00:30:54.957 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:54.957 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:55.215 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:55.215 23:02:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete 9e10ad87-9f01-4cda-a80b-8353aaeafce6 00:30:55.473 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 28715e70-721e-4a4c-812d-e781d38e4d57 00:30:55.732 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:55.732 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aio_bdev 00:30:55.732 00:30:55.732 real 0m17.073s 00:30:55.732 user 0m34.457s 00:30:55.732 sys 0m3.898s 00:30:55.732 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:55.732 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:55.732 ************************************ 00:30:55.732 END TEST lvs_grow_dirty 00:30:55.732 
************************************ 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:55.991 nvmf_trace.0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:55.991 23:02:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:55.991 rmmod nvme_tcp 00:30:55.991 rmmod nvme_fabrics 00:30:55.991 rmmod nvme_keyring 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2574413 ']' 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2574413 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2574413 ']' 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2574413 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2574413 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:55.991 
23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2574413' 00:30:55.991 killing process with pid 2574413 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2574413 00:30:55.991 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2574413 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:56.251 23:02:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.786 
23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:58.786 00:30:58.786 real 0m42.157s 00:30:58.786 user 0m52.453s 00:30:58.786 sys 0m10.225s 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:58.786 ************************************ 00:30:58.786 END TEST nvmf_lvs_grow 00:30:58.786 ************************************ 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:58.786 ************************************ 00:30:58.786 START TEST nvmf_bdev_io_wait 00:30:58.786 ************************************ 00:30:58.786 23:02:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:58.786 * Looking for test storage... 
00:30:58.786 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.786 --rc genhtml_branch_coverage=1 00:30:58.786 --rc genhtml_function_coverage=1 00:30:58.786 --rc genhtml_legend=1 00:30:58.786 --rc geninfo_all_blocks=1 00:30:58.786 --rc geninfo_unexecuted_blocks=1 00:30:58.786 00:30:58.786 ' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.786 --rc genhtml_branch_coverage=1 00:30:58.786 --rc genhtml_function_coverage=1 00:30:58.786 --rc genhtml_legend=1 00:30:58.786 --rc geninfo_all_blocks=1 00:30:58.786 --rc geninfo_unexecuted_blocks=1 00:30:58.786 00:30:58.786 ' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.786 --rc genhtml_branch_coverage=1 00:30:58.786 --rc genhtml_function_coverage=1 00:30:58.786 --rc genhtml_legend=1 00:30:58.786 --rc geninfo_all_blocks=1 00:30:58.786 --rc geninfo_unexecuted_blocks=1 00:30:58.786 00:30:58.786 ' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:58.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.786 --rc genhtml_branch_coverage=1 00:30:58.786 --rc genhtml_function_coverage=1 
00:30:58.786 --rc genhtml_legend=1 00:30:58.786 --rc geninfo_all_blocks=1 00:30:58.786 --rc geninfo_unexecuted_blocks=1 00:30:58.786 00:30:58.786 ' 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:58.786 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:58.787 23:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.787 23:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:58.787 23:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:58.787 23:02:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:30:58.787 23:02:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:31:04.145 23:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:04.145 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:04.145 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:04.145 Found net devices under 0000:86:00.0: cvl_0_0 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:04.145 Found net devices under 0000:86:00.1: cvl_0_1 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:31:04.145 23:02:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:04.145 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:04.438 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:04.438 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:04.438 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:04.438 23:02:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:04.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:04.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.362 ms 00:31:04.438 00:31:04.438 --- 10.0.0.2 ping statistics --- 00:31:04.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.438 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:04.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:04.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.149 ms 00:31:04.438 00:31:04.438 --- 10.0.0.1 ping statistics --- 00:31:04.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:04.438 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:04.438 23:02:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2578470 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2578470 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2578470 ']' 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:04.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:04.438 23:02:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:04.697 [2024-12-10 23:02:12.195602] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:04.697 [2024-12-10 23:02:12.196583] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:31:04.697 [2024-12-10 23:02:12.196616] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:04.697 [2024-12-10 23:02:12.279048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:04.697 [2024-12-10 23:02:12.326056] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:04.697 [2024-12-10 23:02:12.326092] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:04.697 [2024-12-10 23:02:12.326099] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:04.697 [2024-12-10 23:02:12.326105] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:04.697 [2024-12-10 23:02:12.326110] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:04.697 [2024-12-10 23:02:12.327539] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:04.697 [2024-12-10 23:02:12.327645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:04.697 [2024-12-10 23:02:12.327726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:04.697 [2024-12-10 23:02:12.327727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:04.697 [2024-12-10 23:02:12.328104] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:31:05.635 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 [2024-12-10 23:02:13.140404] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:05.636 [2024-12-10 23:02:13.141183] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:05.636 [2024-12-10 23:02:13.141297] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:05.636 [2024-12-10 23:02:13.141444] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 [2024-12-10 23:02:13.152254] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 Malloc0 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:05.636 [2024-12-10 23:02:13.216616] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2578717 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2578719 00:31:05.636 23:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.636 { 00:31:05.636 "params": { 00:31:05.636 "name": "Nvme$subsystem", 00:31:05.636 "trtype": "$TEST_TRANSPORT", 00:31:05.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.636 "adrfam": "ipv4", 00:31:05.636 "trsvcid": "$NVMF_PORT", 00:31:05.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.636 "hdgst": ${hdgst:-false}, 00:31:05.636 "ddgst": ${ddgst:-false} 00:31:05.636 }, 00:31:05.636 "method": "bdev_nvme_attach_controller" 00:31:05.636 } 00:31:05.636 EOF 00:31:05.636 )") 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2578721 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:05.636 23:02:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.636 { 00:31:05.636 "params": { 00:31:05.636 "name": "Nvme$subsystem", 00:31:05.636 "trtype": "$TEST_TRANSPORT", 00:31:05.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.636 "adrfam": "ipv4", 00:31:05.636 "trsvcid": "$NVMF_PORT", 00:31:05.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.636 "hdgst": ${hdgst:-false}, 00:31:05.636 "ddgst": ${ddgst:-false} 00:31:05.636 }, 00:31:05.636 "method": "bdev_nvme_attach_controller" 00:31:05.636 } 00:31:05.636 EOF 00:31:05.636 )") 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2578724 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.636 { 00:31:05.636 "params": { 
00:31:05.636 "name": "Nvme$subsystem", 00:31:05.636 "trtype": "$TEST_TRANSPORT", 00:31:05.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.636 "adrfam": "ipv4", 00:31:05.636 "trsvcid": "$NVMF_PORT", 00:31:05.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.636 "hdgst": ${hdgst:-false}, 00:31:05.636 "ddgst": ${ddgst:-false} 00:31:05.636 }, 00:31:05.636 "method": "bdev_nvme_attach_controller" 00:31:05.636 } 00:31:05.636 EOF 00:31:05.636 )") 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:05.636 { 00:31:05.636 "params": { 00:31:05.636 "name": "Nvme$subsystem", 00:31:05.636 "trtype": "$TEST_TRANSPORT", 00:31:05.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:05.636 "adrfam": "ipv4", 00:31:05.636 "trsvcid": "$NVMF_PORT", 00:31:05.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:05.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:05.636 "hdgst": ${hdgst:-false}, 00:31:05.636 "ddgst": ${ddgst:-false} 00:31:05.636 }, 
00:31:05.636 "method": "bdev_nvme_attach_controller" 00:31:05.636 } 00:31:05.636 EOF 00:31:05.636 )") 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2578717 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:05.636 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.636 "params": { 00:31:05.636 "name": "Nvme1", 00:31:05.636 "trtype": "tcp", 00:31:05.636 "traddr": "10.0.0.2", 00:31:05.636 "adrfam": "ipv4", 00:31:05.637 "trsvcid": "4420", 00:31:05.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.637 "hdgst": false, 00:31:05.637 "ddgst": false 00:31:05.637 }, 00:31:05.637 "method": "bdev_nvme_attach_controller" 00:31:05.637 }' 00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.637 "params": { 00:31:05.637 "name": "Nvme1", 00:31:05.637 "trtype": "tcp", 00:31:05.637 "traddr": "10.0.0.2", 00:31:05.637 "adrfam": "ipv4", 00:31:05.637 "trsvcid": "4420", 00:31:05.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.637 "hdgst": false, 00:31:05.637 "ddgst": false 00:31:05.637 }, 00:31:05.637 "method": "bdev_nvme_attach_controller" 00:31:05.637 }' 00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.637 "params": { 00:31:05.637 "name": "Nvme1", 00:31:05.637 "trtype": "tcp", 00:31:05.637 "traddr": "10.0.0.2", 00:31:05.637 "adrfam": "ipv4", 00:31:05.637 "trsvcid": "4420", 00:31:05.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.637 "hdgst": false, 00:31:05.637 "ddgst": false 00:31:05.637 }, 00:31:05.637 "method": "bdev_nvme_attach_controller" 00:31:05.637 }' 00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:31:05.637 23:02:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:05.637 "params": { 00:31:05.637 "name": "Nvme1", 00:31:05.637 "trtype": "tcp", 00:31:05.637 "traddr": "10.0.0.2", 00:31:05.637 "adrfam": "ipv4", 00:31:05.637 "trsvcid": "4420", 00:31:05.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:05.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:05.637 "hdgst": false, 00:31:05.637 "ddgst": false 00:31:05.637 }, 00:31:05.637 "method": "bdev_nvme_attach_controller" 00:31:05.637 }' 00:31:05.637 [2024-12-10 23:02:13.254581] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 
initialization... 00:31:05.637 [2024-12-10 23:02:13.254629] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:31:05.637 [2024-12-10 23:02:13.268775] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:31:05.637 [2024-12-10 23:02:13.268818] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:31:05.637 [2024-12-10 23:02:13.269134] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:31:05.637 [2024-12-10 23:02:13.269178] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:31:05.637 [2024-12-10 23:02:13.271420] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:31:05.637 [2024-12-10 23:02:13.271470] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:31:05.896 [2024-12-10 23:02:13.426134] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.896 [2024-12-10 23:02:13.468276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:31:05.896 [2024-12-10 23:02:13.517364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:05.896 [2024-12-10 23:02:13.559222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:31:05.896 [2024-12-10 23:02:13.617489] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.155 [2024-12-10 23:02:13.675747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:31:06.155 [2024-12-10 23:02:13.678618] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.155 [2024-12-10 23:02:13.719214] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:31:06.155 Running I/O for 1 seconds... 00:31:06.155 Running I/O for 1 seconds... 00:31:06.414 Running I/O for 1 seconds... 00:31:06.414 Running I/O for 1 seconds... 
00:31:07.350 14150.00 IOPS, 55.27 MiB/s 00:31:07.350 Latency(us) 00:31:07.350 [2024-12-10T22:02:15.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.350 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:31:07.350 Nvme1n1 : 1.01 14195.50 55.45 0.00 0.00 8988.25 3348.03 10371.78 00:31:07.350 [2024-12-10T22:02:15.079Z] =================================================================================================================== 00:31:07.350 [2024-12-10T22:02:15.079Z] Total : 14195.50 55.45 0.00 0.00 8988.25 3348.03 10371.78 00:31:07.350 6764.00 IOPS, 26.42 MiB/s 00:31:07.350 Latency(us) 00:31:07.350 [2024-12-10T22:02:15.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.350 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:31:07.350 Nvme1n1 : 1.01 6823.73 26.66 0.00 0.00 18602.15 1638.40 28038.01 00:31:07.350 [2024-12-10T22:02:15.079Z] =================================================================================================================== 00:31:07.350 [2024-12-10T22:02:15.079Z] Total : 6823.73 26.66 0.00 0.00 18602.15 1638.40 28038.01 00:31:07.350 236312.00 IOPS, 923.09 MiB/s 00:31:07.351 Latency(us) 00:31:07.351 [2024-12-10T22:02:15.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.351 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:31:07.351 Nvme1n1 : 1.00 235940.37 921.64 0.00 0.00 539.57 229.73 1552.92 00:31:07.351 [2024-12-10T22:02:15.080Z] =================================================================================================================== 00:31:07.351 [2024-12-10T22:02:15.080Z] Total : 235940.37 921.64 0.00 0.00 539.57 229.73 1552.92 00:31:07.351 7218.00 IOPS, 28.20 MiB/s 00:31:07.351 Latency(us) 00:31:07.351 [2024-12-10T22:02:15.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:07.351 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:31:07.351 Nvme1n1 : 1.00 7330.27 28.63 0.00 0.00 17426.74 2578.70 36928.11 00:31:07.351 [2024-12-10T22:02:15.080Z] =================================================================================================================== 00:31:07.351 [2024-12-10T22:02:15.080Z] Total : 7330.27 28.63 0.00 0.00 17426.74 2578.70 36928.11 00:31:07.351 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2578719 00:31:07.351 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2578721 00:31:07.351 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2578724 00:31:07.611 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:07.611 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:07.611 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:07.611 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:07.611 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.612 23:02:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.612 rmmod nvme_tcp 00:31:07.612 rmmod nvme_fabrics 00:31:07.612 rmmod nvme_keyring 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2578470 ']' 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2578470 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2578470 ']' 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2578470 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2578470 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2578470' 00:31:07.612 killing process with pid 2578470 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2578470 00:31:07.612 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2578470 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:31:07.872 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.873 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.873 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.873 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.873 23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.873 
23:02:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:09.779 00:31:09.779 real 0m11.461s 00:31:09.779 user 0m15.241s 00:31:09.779 sys 0m6.400s 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:31:09.779 ************************************ 00:31:09.779 END TEST nvmf_bdev_io_wait 00:31:09.779 ************************************ 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:09.779 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.038 ************************************ 00:31:10.038 START TEST nvmf_queue_depth 00:31:10.039 ************************************ 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:31:10.039 * Looking for test storage... 
00:31:10.039 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.039 --rc genhtml_branch_coverage=1 00:31:10.039 --rc genhtml_function_coverage=1 00:31:10.039 --rc genhtml_legend=1 00:31:10.039 --rc geninfo_all_blocks=1 00:31:10.039 --rc geninfo_unexecuted_blocks=1 00:31:10.039 00:31:10.039 ' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.039 --rc genhtml_branch_coverage=1 00:31:10.039 --rc genhtml_function_coverage=1 00:31:10.039 --rc genhtml_legend=1 00:31:10.039 --rc geninfo_all_blocks=1 00:31:10.039 --rc geninfo_unexecuted_blocks=1 00:31:10.039 00:31:10.039 ' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.039 --rc genhtml_branch_coverage=1 00:31:10.039 --rc genhtml_function_coverage=1 00:31:10.039 --rc genhtml_legend=1 00:31:10.039 --rc geninfo_all_blocks=1 00:31:10.039 --rc geninfo_unexecuted_blocks=1 00:31:10.039 00:31:10.039 ' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:10.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.039 --rc genhtml_branch_coverage=1 00:31:10.039 --rc genhtml_function_coverage=1 00:31:10.039 --rc genhtml_legend=1 00:31:10.039 --rc 
geninfo_all_blocks=1 00:31:10.039 --rc geninfo_unexecuted_blocks=1 00:31:10.039 00:31:10.039 ' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.039 23:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.039 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.040 23:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.040 23:02:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.040 23:02:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:31:16.610 
23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:16.610 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:16.610 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:16.610 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.610 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:16.610 Found net devices under 0000:86:00.0: cvl_0_0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:16.611 Found net devices under 0000:86:00.1: cvl_0_1 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:16.611 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:16.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:16.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:31:16.611 00:31:16.611 --- 10.0.0.2 ping statistics --- 00:31:16.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.611 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:31:16.611 00:31:16.611 --- 10.0.0.1 ping statistics --- 00:31:16.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.611 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:16.611 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2582502 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2582502 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2582502 ']' 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.611 [2024-12-10 23:02:23.668142] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:16.611 [2024-12-10 23:02:23.669135] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:31:16.611 [2024-12-10 23:02:23.669181] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.611 [2024-12-10 23:02:23.750636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.611 [2024-12-10 23:02:23.792279] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.611 [2024-12-10 23:02:23.792314] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.611 [2024-12-10 23:02:23.792321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.611 [2024-12-10 23:02:23.792327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.611 [2024-12-10 23:02:23.792333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:16.611 [2024-12-10 23:02:23.792844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.611 [2024-12-10 23:02:23.860922] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:16.611 [2024-12-10 23:02:23.861143] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
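Before the target app above could start inside the namespace, nvmftestinit did the network plumbing traced at nvmf/common.sh@271-@291. The sequence can be sketched as below; the `run` wrapper only prints the commands (the real steps need root and the `ice`-driven ports), and the interface names `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 addressing are taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestinit plumbing traced above.
# run() only echoes, so this illustrates the order of operations
# without touching the host's network configuration.
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"          # target-side NIC moves into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator side stays in the root namespace
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                       # initiator -> target reachability check
```

With both interfaces up, the two pings in the log (10.0.0.2 from the root namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm the path before nvmf_tgt is launched with `ip netns exec`.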
00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.611 [2024-12-10 23:02:23.941496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.611 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.611 Malloc0 00:31:16.611 23:02:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.612 23:02:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.612 [2024-12-10 23:02:24.009661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.612 
23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2582592 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2582592 /var/tmp/bdevperf.sock 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2582592 ']' 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:16.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.612 [2024-12-10 23:02:24.060529] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:31:16.612 [2024-12-10 23:02:24.060573] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582592 ] 00:31:16.612 [2024-12-10 23:02:24.124996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.612 [2024-12-10 23:02:24.169917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:16.612 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:16.871 NVMe0n1 00:31:16.871 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:16.871 23:02:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:16.871 Running I/O for 10 seconds... 
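The provisioning that led up to this 10-second run (target/queue_depth.sh@23-@35 in the trace above) is a short RPC sequence. A printable sketch follows; `rpc` only echoes here, whereas the real test drives scripts/rpc.py against /var/tmp/spdk.sock for the target and /var/tmp/bdevperf.sock for the bdevperf initiator:

```shell
#!/usr/bin/env bash
# The RPC sequence from the trace, as a printable sketch.
rpc() { echo "rpc.py $*"; }

# Target side: TCP transport with 8192-byte in-capsule data, a 64 MiB /
# 512 B-block RAM bdev, and a subsystem exporting it on 10.0.0.2:4420.
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf (started with -q 1024 -o 4096 -w verify -t 10)
# attaches over its own RPC socket, then perform_tests kicks off the run.
rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
```

The queue depth under test comes from bdevperf's `-q 1024`, which is why the result block below reports `"queue_depth": 1024` against the NVMe0n1 bdev.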
00:31:18.744 11488.00 IOPS, 44.88 MiB/s [2024-12-10T22:02:27.850Z] 11936.00 IOPS, 46.62 MiB/s [2024-12-10T22:02:28.786Z] 12128.00 IOPS, 47.38 MiB/s [2024-12-10T22:02:29.722Z] 12126.75 IOPS, 47.37 MiB/s [2024-12-10T22:02:30.656Z] 12088.40 IOPS, 47.22 MiB/s [2024-12-10T22:02:31.592Z] 12113.50 IOPS, 47.32 MiB/s [2024-12-10T22:02:32.527Z] 12148.86 IOPS, 47.46 MiB/s [2024-12-10T22:02:33.463Z] 12171.62 IOPS, 47.55 MiB/s [2024-12-10T22:02:34.841Z] 12216.33 IOPS, 47.72 MiB/s [2024-12-10T22:02:34.841Z] 12264.60 IOPS, 47.91 MiB/s 00:31:27.112 Latency(us) 00:31:27.112 [2024-12-10T22:02:34.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.112 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:27.112 Verification LBA range: start 0x0 length 0x4000 00:31:27.112 NVMe0n1 : 10.10 12227.80 47.76 0.00 0.00 83121.64 19717.79 63370.46 00:31:27.112 [2024-12-10T22:02:34.841Z] =================================================================================================================== 00:31:27.112 [2024-12-10T22:02:34.841Z] Total : 12227.80 47.76 0.00 0.00 83121.64 19717.79 63370.46 00:31:27.112 { 00:31:27.112 "results": [ 00:31:27.112 { 00:31:27.112 "job": "NVMe0n1", 00:31:27.112 "core_mask": "0x1", 00:31:27.112 "workload": "verify", 00:31:27.112 "status": "finished", 00:31:27.112 "verify_range": { 00:31:27.112 "start": 0, 00:31:27.112 "length": 16384 00:31:27.112 }, 00:31:27.112 "queue_depth": 1024, 00:31:27.112 "io_size": 4096, 00:31:27.112 "runtime": 10.101818, 00:31:27.112 "iops": 12227.798996180687, 00:31:27.112 "mibps": 47.76483982883081, 00:31:27.112 "io_failed": 0, 00:31:27.112 "io_timeout": 0, 00:31:27.112 "avg_latency_us": 83121.63816003286, 00:31:27.112 "min_latency_us": 19717.787826086955, 00:31:27.112 "max_latency_us": 63370.46260869565 00:31:27.112 } 00:31:27.112 ], 00:31:27.112 "core_count": 1 00:31:27.112 } 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2582592 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2582592 ']' 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2582592 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2582592 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582592' 00:31:27.112 killing process with pid 2582592 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2582592 00:31:27.112 Received shutdown signal, test time was about 10.000000 seconds 00:31:27.112 00:31:27.112 Latency(us) 00:31:27.112 [2024-12-10T22:02:34.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:27.112 [2024-12-10T22:02:34.841Z] =================================================================================================================== 00:31:27.112 [2024-12-10T22:02:34.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2582592 00:31:27.112 23:02:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:27.112 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:27.112 rmmod nvme_tcp 00:31:27.112 rmmod nvme_fabrics 00:31:27.112 rmmod nvme_keyring 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2582502 ']' 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2582502 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2582502 ']' 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2582502 00:31:27.371 23:02:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2582502 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2582502' 00:31:27.371 killing process with pid 2582502 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2582502 00:31:27.371 23:02:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2582502 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:27.371 23:02:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:29.905 00:31:29.905 real 0m19.645s 00:31:29.905 user 0m22.764s 00:31:29.905 sys 0m6.182s 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:29.905 ************************************ 00:31:29.905 END TEST nvmf_queue_depth 00:31:29.905 ************************************ 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:29.905 ************************************ 00:31:29.905 START 
TEST nvmf_target_multipath 00:31:29.905 ************************************ 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:31:29.905 * Looking for test storage... 00:31:29.905 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:29.905 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:31:29.906 23:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.906 --rc genhtml_branch_coverage=1 00:31:29.906 --rc genhtml_function_coverage=1 00:31:29.906 --rc genhtml_legend=1 00:31:29.906 --rc geninfo_all_blocks=1 00:31:29.906 --rc geninfo_unexecuted_blocks=1 00:31:29.906 00:31:29.906 ' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.906 --rc genhtml_branch_coverage=1 00:31:29.906 --rc genhtml_function_coverage=1 00:31:29.906 --rc genhtml_legend=1 00:31:29.906 --rc geninfo_all_blocks=1 00:31:29.906 --rc geninfo_unexecuted_blocks=1 00:31:29.906 00:31:29.906 ' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.906 --rc genhtml_branch_coverage=1 00:31:29.906 --rc genhtml_function_coverage=1 00:31:29.906 --rc genhtml_legend=1 00:31:29.906 --rc geninfo_all_blocks=1 00:31:29.906 --rc geninfo_unexecuted_blocks=1 00:31:29.906 00:31:29.906 ' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:29.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:29.906 --rc genhtml_branch_coverage=1 00:31:29.906 --rc genhtml_function_coverage=1 00:31:29.906 --rc genhtml_legend=1 00:31:29.906 --rc geninfo_all_blocks=1 00:31:29.906 --rc geninfo_unexecuted_blocks=1 00:31:29.906 00:31:29.906 ' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:29.906 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:29.906 23:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:29.907 23:02:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:31:29.907 23:02:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:31:36.475 23:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:36.475 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:36.476 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:36.476 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:36.476 Found net devices under 0000:86:00.0: cvl_0_0 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:36.476 23:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:36.476 Found net devices under 0000:86:00.1: cvl_0_1 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:36.476 23:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:36.476 23:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:36.476 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:36.476 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:31:36.476 00:31:36.476 --- 10.0.0.2 ping statistics --- 00:31:36.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.476 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:36.476 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:36.476 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:31:36.476 00:31:36.476 --- 10.0.0.1 ping statistics --- 00:31:36.476 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:36.476 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:36.476 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:31:36.477 only one NIC for nvmf test 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:31:36.477 23:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:36.477 rmmod nvme_tcp 00:31:36.477 rmmod nvme_fabrics 00:31:36.477 rmmod nvme_keyring 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:36.477 23:02:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:36.477 23:02:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.855 
23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.855 00:31:37.855 real 0m8.264s 00:31:37.855 user 0m1.798s 00:31:37.855 sys 0m4.474s 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:31:37.855 ************************************ 00:31:37.855 END TEST nvmf_target_multipath 00:31:37.855 ************************************ 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:37.855 ************************************ 00:31:37.855 START TEST nvmf_zcopy 00:31:37.855 ************************************ 00:31:37.855 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:38.115 * Looking for test storage... 
00:31:38.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:38.115 23:02:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:38.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.115 --rc genhtml_branch_coverage=1 00:31:38.115 --rc genhtml_function_coverage=1 00:31:38.115 --rc genhtml_legend=1 00:31:38.115 --rc geninfo_all_blocks=1 00:31:38.115 --rc geninfo_unexecuted_blocks=1 00:31:38.115 00:31:38.115 ' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:38.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.115 --rc genhtml_branch_coverage=1 00:31:38.115 --rc genhtml_function_coverage=1 00:31:38.115 --rc genhtml_legend=1 00:31:38.115 --rc geninfo_all_blocks=1 00:31:38.115 --rc geninfo_unexecuted_blocks=1 00:31:38.115 00:31:38.115 ' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:38.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.115 --rc genhtml_branch_coverage=1 00:31:38.115 --rc genhtml_function_coverage=1 00:31:38.115 --rc genhtml_legend=1 00:31:38.115 --rc geninfo_all_blocks=1 00:31:38.115 --rc geninfo_unexecuted_blocks=1 00:31:38.115 00:31:38.115 ' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:38.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:38.115 --rc genhtml_branch_coverage=1 00:31:38.115 --rc genhtml_function_coverage=1 00:31:38.115 --rc genhtml_legend=1 00:31:38.115 --rc geninfo_all_blocks=1 00:31:38.115 --rc geninfo_unexecuted_blocks=1 00:31:38.115 00:31:38.115 ' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.115 23:02:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:31:38.115 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:38.116 23:02:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:31:38.116 23:02:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:44.684 
23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:44.684 23:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:44.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:44.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:44.684 Found net devices under 0000:86:00.0: cvl_0_0 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:44.684 Found net devices under 0000:86:00.1: cvl_0_1 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:44.684 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:44.685 23:02:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:44.685 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:44.685 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:31:44.685 00:31:44.685 --- 10.0.0.2 ping statistics --- 00:31:44.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.685 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:44.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:44.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.151 ms 00:31:44.685 00:31:44.685 --- 10.0.0.1 ping statistics --- 00:31:44.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:44.685 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=2591264 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2591264 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2591264 ']' 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:44.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 [2024-12-10 23:02:51.748400] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:44.685 [2024-12-10 23:02:51.749386] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:31:44.685 [2024-12-10 23:02:51.749422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.685 [2024-12-10 23:02:51.830113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.685 [2024-12-10 23:02:51.870900] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.685 [2024-12-10 23:02:51.870936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.685 [2024-12-10 23:02:51.870944] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.685 [2024-12-10 23:02:51.870950] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.685 [2024-12-10 23:02:51.870959] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.685 [2024-12-10 23:02:51.871482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:44.685 [2024-12-10 23:02:51.940155] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:44.685 [2024-12-10 23:02:51.940378] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:44.685 23:02:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 [2024-12-10 23:02:52.020108] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 
23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 [2024-12-10 23:02:52.044326] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 malloc0 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:44.685 { 00:31:44.685 "params": { 00:31:44.685 "name": "Nvme$subsystem", 00:31:44.685 "trtype": "$TEST_TRANSPORT", 00:31:44.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.685 "adrfam": "ipv4", 00:31:44.685 "trsvcid": "$NVMF_PORT", 00:31:44.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.685 "hdgst": ${hdgst:-false}, 00:31:44.685 "ddgst": ${ddgst:-false} 00:31:44.685 }, 00:31:44.685 "method": "bdev_nvme_attach_controller" 00:31:44.685 } 00:31:44.685 EOF 00:31:44.685 )") 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:44.685 23:02:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:31:44.685 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:44.686 23:02:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:44.686 "params": { 00:31:44.686 "name": "Nvme1", 00:31:44.686 "trtype": "tcp", 00:31:44.686 "traddr": "10.0.0.2", 00:31:44.686 "adrfam": "ipv4", 00:31:44.686 "trsvcid": "4420", 00:31:44.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.686 "hdgst": false, 00:31:44.686 "ddgst": false 00:31:44.686 }, 00:31:44.686 "method": "bdev_nvme_attach_controller" 00:31:44.686 }' 00:31:44.686 [2024-12-10 23:02:52.130378] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:31:44.686 [2024-12-10 23:02:52.130422] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2591407 ] 00:31:44.686 [2024-12-10 23:02:52.204539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.686 [2024-12-10 23:02:52.247347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.944 Running I/O for 10 seconds... 
00:31:46.816 8333.00 IOPS, 65.10 MiB/s [2024-12-10T22:02:55.920Z] 8420.00 IOPS, 65.78 MiB/s [2024-12-10T22:02:56.856Z] 8441.00 IOPS, 65.95 MiB/s [2024-12-10T22:02:57.791Z] 8452.50 IOPS, 66.04 MiB/s [2024-12-10T22:02:58.726Z] 8466.60 IOPS, 66.15 MiB/s [2024-12-10T22:02:59.662Z] 8471.67 IOPS, 66.18 MiB/s [2024-12-10T22:03:00.600Z] 8474.00 IOPS, 66.20 MiB/s [2024-12-10T22:03:01.975Z] 8467.25 IOPS, 66.15 MiB/s [2024-12-10T22:03:02.911Z] 8471.89 IOPS, 66.19 MiB/s [2024-12-10T22:03:02.911Z] 8470.40 IOPS, 66.17 MiB/s 00:31:55.182 Latency(us) 00:31:55.182 [2024-12-10T22:03:02.911Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:55.182 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:55.182 Verification LBA range: start 0x0 length 0x1000 00:31:55.182 Nvme1n1 : 10.01 8474.35 66.21 0.00 0.00 15060.96 1638.40 21655.37 00:31:55.182 [2024-12-10T22:03:02.911Z] =================================================================================================================== 00:31:55.182 [2024-12-10T22:03:02.911Z] Total : 8474.35 66.21 0.00 0.00 15060.96 1638.40 21655.37 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2593132 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:31:55.182 23:03:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:55.182 { 00:31:55.182 "params": { 00:31:55.182 "name": "Nvme$subsystem", 00:31:55.182 "trtype": "$TEST_TRANSPORT", 00:31:55.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:55.182 "adrfam": "ipv4", 00:31:55.182 "trsvcid": "$NVMF_PORT", 00:31:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:55.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:55.182 "hdgst": ${hdgst:-false}, 00:31:55.182 "ddgst": ${ddgst:-false} 00:31:55.182 }, 00:31:55.182 "method": "bdev_nvme_attach_controller" 00:31:55.182 } 00:31:55.182 EOF 00:31:55.182 )") 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:31:55.182 [2024-12-10 23:03:02.727849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:55.182 [2024-12-10 23:03:02.727881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:31:55.182 23:03:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:55.182 "params": { 00:31:55.182 "name": "Nvme1", 00:31:55.182 "trtype": "tcp", 00:31:55.182 "traddr": "10.0.0.2", 00:31:55.182 "adrfam": "ipv4", 00:31:55.182 "trsvcid": "4420", 00:31:55.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:55.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:55.182 "hdgst": false, 00:31:55.182 "ddgst": false 00:31:55.182 }, 00:31:55.182 "method": "bdev_nvme_attach_controller" 00:31:55.182 }' 00:31:55.182 [2024-12-10 23:03:02.739798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:55.182 [2024-12-10 23:03:02.739812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:55.182 [2024-12-10 23:03:02.751794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:55.182 [2024-12-10 23:03:02.751804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:55.182 [2024-12-10 23:03:02.763793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:55.182 [2024-12-10 23:03:02.763803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:55.182 [2024-12-10 23:03:02.765053] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:31:55.182 [2024-12-10 23:03:02.765096] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2593132 ]
00:31:55.182 [2024-12-10 23:03:02.775792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:55.182 [2024-12-10 23:03:02.775803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the spdk_nvmf_subsystem_add_ns_ext / nvmf_rpc_ns_paused error pair above repeats at roughly 12 ms intervals from 23:03:02.775 through 23:03:05.040; repeated occurrences elided ...]
00:31:55.182 [2024-12-10 23:03:02.839644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:55.182 [2024-12-10 23:03:02.880401] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:31:55.701 Running I/O for 5 seconds...
00:31:56.479 16541.00 IOPS, 129.23 MiB/s [2024-12-10T22:03:04.208Z]
00:31:57.513 [2024-12-10 23:03:05.040348] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:57.513 [2024-12-10 23:03:05.040367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
add namespace 00:31:57.513 [2024-12-10 23:03:05.055559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.055579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.067067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.067086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.081871] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.081892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.096665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.096686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.112646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.112667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.127988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.128009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.141547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.141572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.156594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.156612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.171718] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.171739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.185297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.185316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.200372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.200391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 16529.00 IOPS, 129.13 MiB/s [2024-12-10T22:03:05.242Z] [2024-12-10 23:03:05.215593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.215612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.513 [2024-12-10 23:03:05.230093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.513 [2024-12-10 23:03:05.230113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.245605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.245624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.260917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.260936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.275813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.275832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.290376] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.290396] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.305471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.305490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.320101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.320121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.335835] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.335854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.348778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.348797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.363757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.363780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.374669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.374689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.389789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.389809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.404738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.404757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.419404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.419428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.432408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.432427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.445126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.445145] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.456277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.456295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.469668] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.469686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.484655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.484674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:57.781 [2024-12-10 23:03:05.499851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:57.781 [2024-12-10 23:03:05.499871] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.511016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 
[2024-12-10 23:03:05.511035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.525524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.525544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.540849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.540868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.556625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.556644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.571894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.571912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.584895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.584915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.597429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.597449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.612873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.612892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.627851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.627869] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.638901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.638920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.653957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.653977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.668498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.668517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.684164] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.684185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.700147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.700172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.715421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.715441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.728822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.728842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.744080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.744099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:58.079 [2024-12-10 23:03:05.759901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.759921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.774051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.774070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.079 [2024-12-10 23:03:05.789551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.079 [2024-12-10 23:03:05.789570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.804352] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.804370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.816465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.816484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.831453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.831472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.846108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.846126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.860589] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.860608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.875414] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.875433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.888877] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.888897] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.904153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.904178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.919224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.919245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.933641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.933667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.948457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.948477] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.963829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.963849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.977613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.977633] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:05.992841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:05.992860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:06.007786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:06.007806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:06.020803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:06.020824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:06.035555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:06.035574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:06.049588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:06.049608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:06.064499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:06.064519] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.362 [2024-12-10 23:03:06.080222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.362 [2024-12-10 23:03:06.080242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.095789] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.095809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.108521] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 
[2024-12-10 23:03:06.108540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.124301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.124322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.140083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.140104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.155201] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.155221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.169746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.169766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.184613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.184632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.199869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.199888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 16497.33 IOPS, 128.89 MiB/s [2024-12-10T22:03:06.350Z] [2024-12-10 23:03:06.210458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.210478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.225547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 
[2024-12-10 23:03:06.225573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.240705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.240725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.255759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.255780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.269931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.269951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.285062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.285082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.300527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.300547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.315383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.315403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.329849] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.329869] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.621 [2024-12-10 23:03:06.344802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.621 [2024-12-10 23:03:06.344822] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.359693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.359713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.372906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.372925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.388103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.388122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.400617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.400635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.415615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.415635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.428733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.428753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.439934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.439953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.453458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.453478] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:58.879 [2024-12-10 23:03:06.468729] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.468748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.483594] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.483614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.496767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.496791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.511781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.511800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.522510] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.522529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.537561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.537580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.552683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.552702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.567546] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.567566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.579439] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.579459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:58.879 [2024-12-10 23:03:06.593874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:58.879 [2024-12-10 23:03:06.593893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.608758] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.608777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.623750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.623770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.635467] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.635486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.649564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.649584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.663976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.663995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.674499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.674518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.689725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.689744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.704413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.137 [2024-12-10 23:03:06.704432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.137 [2024-12-10 23:03:06.719866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.719885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.733899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.733919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.748967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.748986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.764032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.764055] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.773902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.773924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.789108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.789128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.804130] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 
[2024-12-10 23:03:06.804149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.819826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.819845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.832387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.832406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.845585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.845604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.138 [2024-12-10 23:03:06.860413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.138 [2024-12-10 23:03:06.860432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.875931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.875950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.886002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.886020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.901215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.901234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.916674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.916693] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.932101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.932120] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.944720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.944739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.959867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.959886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.972680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.972699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:06.987690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:06.987709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.001339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.001359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.016224] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.016242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.032412] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.032437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:59.396 [2024-12-10 23:03:07.047719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.047740] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.060472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.060490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.073171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.073191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.088263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.088282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.103656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.103675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.396 [2024-12-10 23:03:07.118566] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.396 [2024-12-10 23:03:07.118584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.654 [2024-12-10 23:03:07.133417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.654 [2024-12-10 23:03:07.133436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.654 [2024-12-10 23:03:07.148199] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.148218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.163847] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.163868] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.177791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.177811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.192772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.192792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 16486.25 IOPS, 128.80 MiB/s [2024-12-10T22:03:07.384Z] [2024-12-10 23:03:07.207865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.207885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.220994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.221013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.236389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.236408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.251842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.251867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.264585] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.264604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.280176] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.280195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.296120] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.296140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.311739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.311758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.324152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.324178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.337779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.337800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.353175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.353193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.367878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.367899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.655 [2024-12-10 23:03:07.379584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.655 [2024-12-10 23:03:07.379605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.394248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.394269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.409355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.409375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.424766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.424786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.439982] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.440003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.451530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.451549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.465786] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.465806] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.480916] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.480936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.495966] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.495986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.508377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 
[2024-12-10 23:03:07.508397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.521424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.521444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.536185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.536204] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.552035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.552054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.564088] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.564107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.577438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.577458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.592817] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.592838] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.608041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.608062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.621530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.621550] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:59.913 [2024-12-10 23:03:07.636523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:59.913 [2024-12-10 23:03:07.636542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.652225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.652244] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.664755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.664775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.679541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.679562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.693645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.693664] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.708885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.708905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.723797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.723817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.734595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.734616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:00.172 [2024-12-10 23:03:07.749196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.749216] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.763900] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.763920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.776453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.776472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.791816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.791836] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.804627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.804646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.817425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.817444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.832375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.832394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.847779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.847799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.861156] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.861181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.876286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.876306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.172 [2024-12-10 23:03:07.892203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.172 [2024-12-10 23:03:07.892221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.905811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.905831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.920736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.920755] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.935842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.935862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.948428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.948446] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.963803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.963822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.978195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.978214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:07.993145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:07.993170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.008261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.008281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.024198] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.024217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.039939] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.039958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.052465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.052484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.067558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.067578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.080836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.080855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.095909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 
[2024-12-10 23:03:08.095929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.107266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.107290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.121762] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.121782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.137267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.137285] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.430 [2024-12-10 23:03:08.152309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.430 [2024-12-10 23:03:08.152327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.167540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.167559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.181719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.181738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.197071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.197092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 16498.20 IOPS, 128.89 MiB/s [2024-12-10T22:03:08.418Z] [2024-12-10 23:03:08.211410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 
[2024-12-10 23:03:08.211430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 00:32:00.689 Latency(us) 00:32:00.689 [2024-12-10T22:03:08.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:00.689 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:32:00.689 Nvme1n1 : 5.01 16499.72 128.90 0.00 0.00 7749.81 1994.57 13335.15 00:32:00.689 [2024-12-10T22:03:08.418Z] =================================================================================================================== 00:32:00.689 [2024-12-10T22:03:08.418Z] Total : 16499.72 128.90 0.00 0.00 7749.81 1994.57 13335.15 00:32:00.689 [2024-12-10 23:03:08.219801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.219819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.231799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.231815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.243811] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.243828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.255799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.255815] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.267803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.267817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.279797] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.279811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.291796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.291811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.303795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.303808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.315795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.315817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.327795] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.327809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.339792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.339803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.351799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.351812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.363791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 [2024-12-10 23:03:08.363802] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 [2024-12-10 23:03:08.375793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:00.689 
[2024-12-10 23:03:08.375804] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:00.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2593132) - No such process 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2593132 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.689 delay0 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:32:00.689 23:03:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:00.947 [2024-12-10 23:03:08.563274] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:07.511 [2024-12-10 23:03:14.839240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97fe50 is same with the state(6) to be set 00:32:07.511 Initializing NVMe Controllers 00:32:07.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:07.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:07.511 Initialization complete. Launching workers. 00:32:07.511 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 262, failed: 14228 00:32:07.511 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 14392, failed to submit 98 00:32:07.511 success 14313, unsuccessful 79, failed 0 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for 
i in {1..20} 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:07.511 rmmod nvme_tcp 00:32:07.511 rmmod nvme_fabrics 00:32:07.511 rmmod nvme_keyring 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2591264 ']' 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2591264 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2591264 ']' 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2591264 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2591264 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2591264' 00:32:07.511 killing process with pid 2591264 00:32:07.511 23:03:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2591264 00:32:07.511 23:03:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2591264 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:07.511 23:03:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:10.046 00:32:10.046 real 0m31.639s 00:32:10.046 user 0m41.083s 00:32:10.046 sys 0m12.353s 00:32:10.046 23:03:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.046 ************************************ 00:32:10.046 END TEST nvmf_zcopy 00:32:10.046 ************************************ 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:10.046 ************************************ 00:32:10.046 START TEST nvmf_nmic 00:32:10.046 ************************************ 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:32:10.046 * Looking for test storage... 
00:32:10.046 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 
-- # case "$op" in 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.046 --rc genhtml_branch_coverage=1 00:32:10.046 --rc genhtml_function_coverage=1 00:32:10.046 --rc genhtml_legend=1 00:32:10.046 --rc geninfo_all_blocks=1 00:32:10.046 --rc geninfo_unexecuted_blocks=1 00:32:10.046 00:32:10.046 ' 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.046 --rc genhtml_branch_coverage=1 00:32:10.046 --rc genhtml_function_coverage=1 00:32:10.046 --rc genhtml_legend=1 00:32:10.046 --rc geninfo_all_blocks=1 00:32:10.046 --rc geninfo_unexecuted_blocks=1 00:32:10.046 00:32:10.046 ' 00:32:10.046 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:10.046 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.046 --rc genhtml_branch_coverage=1 00:32:10.046 --rc genhtml_function_coverage=1 00:32:10.047 --rc genhtml_legend=1 00:32:10.047 --rc geninfo_all_blocks=1 00:32:10.047 --rc geninfo_unexecuted_blocks=1 00:32:10.047 00:32:10.047 ' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:10.047 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:10.047 --rc genhtml_branch_coverage=1 00:32:10.047 --rc genhtml_function_coverage=1 00:32:10.047 --rc genhtml_legend=1 00:32:10.047 --rc geninfo_all_blocks=1 00:32:10.047 --rc geninfo_unexecuted_blocks=1 00:32:10.047 00:32:10.047 ' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:32:10.047 23:03:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@315 -- # pci_devs=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:15.320 23:03:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:15.320 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:15.580 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:15.580 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 
00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:15.580 Found net devices under 0000:86:00.0: cvl_0_0 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:15.580 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:15.581 Found net devices under 0000:86:00.1: cvl_0_1 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:15.581 23:03:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:15.581 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:15.581 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:32:15.581 00:32:15.581 --- 10.0.0.2 ping statistics --- 00:32:15.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.581 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:15.581 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:15.581 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:32:15.581 00:32:15.581 --- 10.0.0.1 ping statistics --- 00:32:15.581 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:15.581 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:15.581 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2598905 
00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2598905 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2598905 ']' 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:15.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:15.840 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:15.840 [2024-12-10 23:03:23.399173] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:15.840 [2024-12-10 23:03:23.400171] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:32:15.840 [2024-12-10 23:03:23.400228] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:15.840 [2024-12-10 23:03:23.481338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:15.840 [2024-12-10 23:03:23.522956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:15.840 [2024-12-10 23:03:23.522997] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:15.840 [2024-12-10 23:03:23.523005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:15.840 [2024-12-10 23:03:23.523011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:15.840 [2024-12-10 23:03:23.523016] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:15.840 [2024-12-10 23:03:23.524451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:15.840 [2024-12-10 23:03:23.524561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:15.840 [2024-12-10 23:03:23.524668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:15.840 [2024-12-10 23:03:23.524669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:16.100 [2024-12-10 23:03:23.594866] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:16.100 [2024-12-10 23:03:23.595624] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:16.100 [2024-12-10 23:03:23.595955] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:16.100 [2024-12-10 23:03:23.596298] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:16.100 [2024-12-10 23:03:23.596342] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 [2024-12-10 23:03:23.669531] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 Malloc0 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 [2024-12-10 23:03:23.757683] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:16.100 23:03:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:32:16.100 test case1: single bdev can't be used in multiple subsystems 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 [2024-12-10 23:03:23.785171] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:32:16.100 [2024-12-10 23:03:23.785196] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:32:16.100 [2024-12-10 23:03:23.785206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:16.100 request: 00:32:16.100 { 00:32:16.100 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:32:16.100 "namespace": { 00:32:16.100 "bdev_name": "Malloc0", 00:32:16.100 "no_auto_visible": false, 00:32:16.100 "hide_metadata": false 00:32:16.100 }, 00:32:16.100 "method": "nvmf_subsystem_add_ns", 00:32:16.100 "req_id": 1 00:32:16.100 } 00:32:16.100 Got JSON-RPC error response 00:32:16.100 response: 00:32:16.100 { 00:32:16.100 "code": -32602, 00:32:16.100 "message": "Invalid parameters" 00:32:16.100 } 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:32:16.100 Adding namespace failed - expected result. 
00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:32:16.100 test case2: host connect to nvmf target in multiple paths 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:16.100 [2024-12-10 23:03:23.793240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:16.100 23:03:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:16.359 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:32:16.618 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:32:16.618 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:32:16.618 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:16.618 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:16.618 23:03:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:32:19.149 23:03:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:19.149 [global] 00:32:19.149 thread=1 00:32:19.149 invalidate=1 00:32:19.149 rw=write 00:32:19.149 time_based=1 00:32:19.149 runtime=1 00:32:19.149 ioengine=libaio 00:32:19.149 direct=1 00:32:19.149 bs=4096 00:32:19.149 iodepth=1 00:32:19.149 norandommap=0 00:32:19.149 numjobs=1 00:32:19.149 00:32:19.149 verify_dump=1 00:32:19.149 verify_backlog=512 00:32:19.149 verify_state_save=0 00:32:19.149 do_verify=1 00:32:19.149 verify=crc32c-intel 00:32:19.149 [job0] 00:32:19.149 filename=/dev/nvme0n1 00:32:19.149 Could not set queue depth (nvme0n1) 00:32:19.149 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:19.149 fio-3.35 00:32:19.149 Starting 1 thread 00:32:20.526 00:32:20.526 job0: (groupid=0, jobs=1): err= 0: pid=2599694: Tue Dec 10 
23:03:27 2024 00:32:20.526 read: IOPS=22, BW=90.3KiB/s (92.5kB/s)(92.0KiB/1019msec) 00:32:20.526 slat (nsec): min=9403, max=23114, avg=21979.17, stdev=2769.08 00:32:20.526 clat (usec): min=40829, max=41068, avg=40966.95, stdev=54.06 00:32:20.526 lat (usec): min=40839, max=41091, avg=40988.93, stdev=55.52 00:32:20.526 clat percentiles (usec): 00:32:20.526 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:32:20.526 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:32:20.526 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:20.526 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:20.526 | 99.99th=[41157] 00:32:20.526 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:32:20.526 slat (nsec): min=9152, max=40499, avg=10286.47, stdev=1864.64 00:32:20.526 clat (usec): min=113, max=330, avg=137.03, stdev= 9.35 00:32:20.526 lat (usec): min=137, max=370, avg=147.31, stdev=10.60 00:32:20.526 clat percentiles (usec): 00:32:20.526 | 1.00th=[ 130], 5.00th=[ 133], 10.00th=[ 133], 20.00th=[ 135], 00:32:20.526 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 137], 60.00th=[ 137], 00:32:20.526 | 70.00th=[ 139], 80.00th=[ 141], 90.00th=[ 143], 95.00th=[ 145], 00:32:20.526 | 99.00th=[ 147], 99.50th=[ 149], 99.90th=[ 330], 99.95th=[ 330], 00:32:20.526 | 99.99th=[ 330] 00:32:20.526 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:32:20.526 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:20.526 lat (usec) : 250=95.51%, 500=0.19% 00:32:20.526 lat (msec) : 50=4.30% 00:32:20.526 cpu : usr=0.69%, sys=0.10%, ctx=535, majf=0, minf=1 00:32:20.526 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:20.526 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.526 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:20.526 issued rwts: 
total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:20.526 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:20.526 00:32:20.526 Run status group 0 (all jobs): 00:32:20.526 READ: bw=90.3KiB/s (92.5kB/s), 90.3KiB/s-90.3KiB/s (92.5kB/s-92.5kB/s), io=92.0KiB (94.2kB), run=1019-1019msec 00:32:20.526 WRITE: bw=2010KiB/s (2058kB/s), 2010KiB/s-2010KiB/s (2058kB/s-2058kB/s), io=2048KiB (2097kB), run=1019-1019msec 00:32:20.526 00:32:20.526 Disk stats (read/write): 00:32:20.526 nvme0n1: ios=70/512, merge=0/0, ticks=1048/69, in_queue=1117, util=95.39% 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:20.526 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:32:20.526 23:03:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:32:20.526 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:20.527 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:32:20.527 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:20.527 23:03:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:20.527 rmmod nvme_tcp 00:32:20.527 rmmod nvme_fabrics 00:32:20.527 rmmod nvme_keyring 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2598905 ']' 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2598905 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2598905 ']' 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2598905 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2598905 
00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2598905' 00:32:20.527 killing process with pid 2598905 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2598905 00:32:20.527 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2598905 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.786 23:03:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.786 23:03:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:22.688 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:22.688 00:32:22.688 real 0m13.117s 00:32:22.688 user 0m24.222s 00:32:22.688 sys 0m5.975s 00:32:22.688 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:22.688 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:32:22.688 ************************************ 00:32:22.688 END TEST nvmf_nmic 00:32:22.688 ************************************ 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:22.947 ************************************ 00:32:22.947 START TEST nvmf_fio_target 00:32:22.947 ************************************ 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:32:22.947 * Looking for test storage... 
00:32:22.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.947 
23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:22.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.947 --rc genhtml_branch_coverage=1 00:32:22.947 --rc genhtml_function_coverage=1 00:32:22.947 --rc genhtml_legend=1 00:32:22.947 --rc geninfo_all_blocks=1 00:32:22.947 --rc geninfo_unexecuted_blocks=1 00:32:22.947 00:32:22.947 ' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:22.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.947 --rc genhtml_branch_coverage=1 00:32:22.947 --rc genhtml_function_coverage=1 00:32:22.947 --rc genhtml_legend=1 00:32:22.947 --rc geninfo_all_blocks=1 00:32:22.947 --rc geninfo_unexecuted_blocks=1 00:32:22.947 00:32:22.947 ' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:22.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.947 --rc genhtml_branch_coverage=1 00:32:22.947 --rc genhtml_function_coverage=1 00:32:22.947 --rc genhtml_legend=1 00:32:22.947 --rc geninfo_all_blocks=1 00:32:22.947 --rc geninfo_unexecuted_blocks=1 00:32:22.947 00:32:22.947 ' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:22.947 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.947 --rc genhtml_branch_coverage=1 00:32:22.947 --rc genhtml_function_coverage=1 00:32:22.947 --rc genhtml_legend=1 00:32:22.947 --rc geninfo_all_blocks=1 
00:32:22.947 --rc geninfo_unexecuted_blocks=1 00:32:22.947 00:32:22.947 ' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 
00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.947 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:22.948 
23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:22.948 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:23.207 23:03:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:32:23.207 23:03:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:32:29.773 23:03:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:29.773 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.773 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:29.774 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:29.774 
23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:29.774 Found net 
devices under 0000:86:00.0: cvl_0_0 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:29.774 Found net devices under 0000:86:00.1: cvl_0_1 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:29.774 23:03:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:29.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.417 ms 00:32:29.774 00:32:29.774 --- 10.0.0.2 ping statistics --- 00:32:29.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.774 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:32:29.774 00:32:29.774 --- 10.0.0.1 ping statistics --- 00:32:29.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.774 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.774 23:03:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2603297 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2603297 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2603297 ']' 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:29.774 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.774 [2024-12-10 23:03:36.615227] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:29.774 [2024-12-10 23:03:36.616250] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:32:29.774 [2024-12-10 23:03:36.616293] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:29.774 [2024-12-10 23:03:36.696137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:29.774 [2024-12-10 23:03:36.738429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.774 [2024-12-10 23:03:36.738467] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.774 [2024-12-10 23:03:36.738476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.774 [2024-12-10 23:03:36.738483] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.774 [2024-12-10 23:03:36.738488] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.774 [2024-12-10 23:03:36.739917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.774 [2024-12-10 23:03:36.740024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.774 [2024-12-10 23:03:36.740133] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.774 [2024-12-10 23:03:36.740134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:29.775 [2024-12-10 23:03:36.809898] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:29.775 [2024-12-10 23:03:36.810552] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:29.775 [2024-12-10 23:03:36.810960] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:29.775 [2024-12-10 23:03:36.811307] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:29.775 [2024-12-10 23:03:36.811364] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.775 23:03:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:29.775 [2024-12-10 23:03:37.052829] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:29.775 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:29.775 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:32:29.775 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 00:32:30.033 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:32:30.033 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:30.033 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:32:30.033 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:30.292 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:32:30.292 23:03:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:32:30.550 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:30.808 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:32:30.808 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.067 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:32:31.067 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:31.067 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 
-- # concat_malloc_bdevs+=Malloc6 00:32:31.067 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:32:31.325 23:03:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:31.584 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:31.584 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:31.584 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:32:31.584 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:31.842 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:32.099 [2024-12-10 23:03:39.688727] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.099 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:32:32.358 23:03:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:32:32.616 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:32.874 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:32:32.874 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:32:32.874 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:32.874 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:32:32.874 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:32:32.874 23:03:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:32:34.776 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:34.777 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:34.777 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:34.777 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:32:34.777 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:34.777 23:03:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:32:34.777 23:03:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:32:34.777 [global] 00:32:34.777 thread=1 00:32:34.777 invalidate=1 00:32:34.777 rw=write 00:32:34.777 time_based=1 00:32:34.777 runtime=1 00:32:34.777 ioengine=libaio 00:32:34.777 direct=1 00:32:34.777 bs=4096 00:32:34.777 iodepth=1 00:32:34.777 norandommap=0 00:32:34.777 numjobs=1 00:32:34.777 00:32:34.777 verify_dump=1 00:32:34.777 verify_backlog=512 00:32:34.777 verify_state_save=0 00:32:34.777 do_verify=1 00:32:34.777 verify=crc32c-intel 00:32:34.777 [job0] 00:32:34.777 filename=/dev/nvme0n1 00:32:34.777 [job1] 00:32:34.777 filename=/dev/nvme0n2 00:32:34.777 [job2] 00:32:34.777 filename=/dev/nvme0n3 00:32:34.777 [job3] 00:32:34.777 filename=/dev/nvme0n4 00:32:35.034 Could not set queue depth (nvme0n1) 00:32:35.034 Could not set queue depth (nvme0n2) 00:32:35.034 Could not set queue depth (nvme0n3) 00:32:35.034 Could not set queue depth (nvme0n4) 00:32:35.291 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.291 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.291 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.291 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:35.291 fio-3.35 00:32:35.291 Starting 4 threads 00:32:36.668 00:32:36.668 job0: (groupid=0, jobs=1): err= 0: pid=2604608: Tue Dec 10 23:03:43 2024 00:32:36.668 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:36.668 slat (nsec): min=6718, max=28049, avg=7906.78, stdev=1007.15 00:32:36.668 clat (usec): min=211, max=451, 
avg=253.83, stdev=45.01 00:32:36.668 lat (usec): min=218, max=460, avg=261.73, stdev=45.07 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 217], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 231], 00:32:36.668 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:32:36.668 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 285], 95.00th=[ 408], 00:32:36.668 | 99.00th=[ 424], 99.50th=[ 429], 99.90th=[ 437], 99.95th=[ 441], 00:32:36.668 | 99.99th=[ 453] 00:32:36.668 write: IOPS=2359, BW=9439KiB/s (9665kB/s)(9448KiB/1001msec); 0 zone resets 00:32:36.668 slat (usec): min=5, max=697, avg=11.32, stdev=14.21 00:32:36.668 clat (usec): min=132, max=917, avg=180.85, stdev=37.56 00:32:36.668 lat (usec): min=143, max=959, avg=192.17, stdev=40.74 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 161], 00:32:36.668 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:32:36.668 | 70.00th=[ 180], 80.00th=[ 200], 90.00th=[ 237], 95.00th=[ 243], 00:32:36.668 | 99.00th=[ 253], 99.50th=[ 281], 99.90th=[ 676], 99.95th=[ 676], 00:32:36.668 | 99.99th=[ 914] 00:32:36.668 bw ( KiB/s): min= 9024, max= 9024, per=26.23%, avg=9024.00, stdev= 0.00, samples=1 00:32:36.668 iops : min= 2256, max= 2256, avg=2256.00, stdev= 0.00, samples=1 00:32:36.668 lat (usec) : 250=85.15%, 500=14.74%, 750=0.09%, 1000=0.02% 00:32:36.668 cpu : usr=2.20%, sys=4.40%, ctx=4412, majf=0, minf=1 00:32:36.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.668 issued rwts: total=2048,2362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:36.668 job1: (groupid=0, jobs=1): err= 0: pid=2604609: Tue Dec 10 23:03:43 2024 00:32:36.668 read: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec) 00:32:36.668 slat (nsec): min=7213, max=22802, avg=8339.83, stdev=1206.64 00:32:36.668 clat (usec): min=185, max=418, avg=271.89, stdev=37.10 00:32:36.668 lat (usec): min=193, max=427, avg=280.23, stdev=37.20 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 243], 00:32:36.668 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:32:36.668 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 334], 95.00th=[ 351], 00:32:36.668 | 99.00th=[ 367], 99.50th=[ 375], 99.90th=[ 383], 99.95th=[ 383], 00:32:36.668 | 99.99th=[ 420] 00:32:36.668 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:32:36.668 slat (nsec): min=10690, max=36632, avg=12133.96, stdev=1740.20 00:32:36.668 clat (usec): min=136, max=338, avg=188.55, stdev=21.33 00:32:36.668 lat (usec): min=147, max=352, avg=200.68, stdev=21.56 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 163], 20.00th=[ 172], 00:32:36.668 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 188], 60.00th=[ 194], 00:32:36.668 | 70.00th=[ 198], 80.00th=[ 206], 90.00th=[ 215], 95.00th=[ 227], 00:32:36.668 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 289], 99.95th=[ 297], 00:32:36.668 | 99.99th=[ 338] 00:32:36.668 bw ( KiB/s): min= 8360, max= 8360, per=24.30%, avg=8360.00, stdev= 0.00, samples=1 00:32:36.668 iops : min= 2090, max= 2090, avg=2090.00, stdev= 0.00, samples=1 00:32:36.668 lat (usec) : 250=63.81%, 500=36.19% 00:32:36.668 cpu : usr=3.60%, sys=6.60%, ctx=4110, majf=0, minf=2 00:32:36.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.668 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.668 latency : target=0, window=0, percentile=100.00%, depth=1 
00:32:36.668 job2: (groupid=0, jobs=1): err= 0: pid=2604610: Tue Dec 10 23:03:43 2024 00:32:36.668 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:36.668 slat (nsec): min=6705, max=23552, avg=7589.47, stdev=916.85 00:32:36.668 clat (usec): min=224, max=3272, avg=263.27, stdev=77.52 00:32:36.668 lat (usec): min=231, max=3280, avg=270.86, stdev=77.55 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 233], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:32:36.668 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:32:36.668 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 277], 95.00th=[ 289], 00:32:36.668 | 99.00th=[ 482], 99.50th=[ 494], 99.90th=[ 510], 99.95th=[ 1123], 00:32:36.668 | 99.99th=[ 3261] 00:32:36.668 write: IOPS=2136, BW=8547KiB/s (8753kB/s)(8556KiB/1001msec); 0 zone resets 00:32:36.668 slat (nsec): min=9664, max=41987, avg=11160.06, stdev=1939.36 00:32:36.668 clat (usec): min=146, max=649, avg=192.67, stdev=20.15 00:32:36.668 lat (usec): min=160, max=660, avg=203.83, stdev=20.40 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 176], 20.00th=[ 182], 00:32:36.668 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 194], 00:32:36.668 | 70.00th=[ 198], 80.00th=[ 202], 90.00th=[ 212], 95.00th=[ 225], 00:32:36.668 | 99.00th=[ 249], 99.50th=[ 297], 99.90th=[ 310], 99.95th=[ 310], 00:32:36.668 | 99.99th=[ 652] 00:32:36.668 bw ( KiB/s): min= 8544, max= 8544, per=24.83%, avg=8544.00, stdev= 0.00, samples=1 00:32:36.668 iops : min= 2136, max= 2136, avg=2136.00, stdev= 0.00, samples=1 00:32:36.668 lat (usec) : 250=66.87%, 500=32.98%, 750=0.10% 00:32:36.668 lat (msec) : 2=0.02%, 4=0.02% 00:32:36.668 cpu : usr=2.10%, sys=4.10%, ctx=4188, majf=0, minf=1 00:32:36.668 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:36.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.668 issued rwts: total=2048,2139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.668 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:36.668 job3: (groupid=0, jobs=1): err= 0: pid=2604611: Tue Dec 10 23:03:43 2024 00:32:36.668 read: IOPS=1922, BW=7688KiB/s (7873kB/s)(7696KiB/1001msec) 00:32:36.668 slat (nsec): min=6790, max=24687, avg=9587.27, stdev=1224.14 00:32:36.668 clat (usec): min=205, max=542, avg=275.43, stdev=41.40 00:32:36.668 lat (usec): min=214, max=550, avg=285.02, stdev=41.41 00:32:36.668 clat percentiles (usec): 00:32:36.668 | 1.00th=[ 221], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 243], 00:32:36.668 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 273], 00:32:36.668 | 70.00th=[ 277], 80.00th=[ 302], 90.00th=[ 347], 95.00th=[ 363], 00:32:36.668 | 99.00th=[ 388], 99.50th=[ 412], 99.90th=[ 465], 99.95th=[ 545], 00:32:36.668 | 99.99th=[ 545] 00:32:36.668 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:36.668 slat (usec): min=9, max=20829, avg=22.91, stdev=459.99 00:32:36.669 clat (usec): min=134, max=365, avg=191.92, stdev=28.89 00:32:36.669 lat (usec): min=146, max=21159, avg=214.83, stdev=463.93 00:32:36.669 clat percentiles (usec): 00:32:36.669 | 1.00th=[ 143], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 172], 00:32:36.669 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 196], 00:32:36.669 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 225], 95.00th=[ 243], 00:32:36.669 | 99.00th=[ 297], 99.50th=[ 326], 99.90th=[ 359], 99.95th=[ 359], 00:32:36.669 | 99.99th=[ 367] 00:32:36.669 bw ( KiB/s): min= 8192, max= 8192, per=23.81%, avg=8192.00, stdev= 0.00, samples=1 00:32:36.669 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:36.669 lat (usec) : 250=63.09%, 500=36.88%, 750=0.03% 00:32:36.669 cpu : usr=2.10%, sys=5.00%, ctx=3975, majf=0, minf=1 00:32:36.669 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:32:36.669 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.669 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:36.669 issued rwts: total=1924,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:36.669 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:36.669 00:32:36.669 Run status group 0 (all jobs): 00:32:36.669 READ: bw=31.5MiB/s (33.0MB/s), 7688KiB/s-8184KiB/s (7873kB/s-8380kB/s), io=31.5MiB (33.0MB), run=1001-1001msec 00:32:36.669 WRITE: bw=33.6MiB/s (35.2MB/s), 8184KiB/s-9439KiB/s (8380kB/s-9665kB/s), io=33.6MiB (35.3MB), run=1001-1001msec 00:32:36.669 00:32:36.669 Disk stats (read/write): 00:32:36.669 nvme0n1: ios=1761/2048, merge=0/0, ticks=642/363, in_queue=1005, util=98.10% 00:32:36.669 nvme0n2: ios=1576/2048, merge=0/0, ticks=1375/372, in_queue=1747, util=98.27% 00:32:36.669 nvme0n3: ios=1612/2048, merge=0/0, ticks=1397/383, in_queue=1780, util=98.33% 00:32:36.669 nvme0n4: ios=1562/1829, merge=0/0, ticks=1363/355, in_queue=1718, util=98.11% 00:32:36.669 23:03:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:32:36.669 [global] 00:32:36.669 thread=1 00:32:36.669 invalidate=1 00:32:36.669 rw=randwrite 00:32:36.669 time_based=1 00:32:36.669 runtime=1 00:32:36.669 ioengine=libaio 00:32:36.669 direct=1 00:32:36.669 bs=4096 00:32:36.669 iodepth=1 00:32:36.669 norandommap=0 00:32:36.669 numjobs=1 00:32:36.669 00:32:36.669 verify_dump=1 00:32:36.669 verify_backlog=512 00:32:36.669 verify_state_save=0 00:32:36.669 do_verify=1 00:32:36.669 verify=crc32c-intel 00:32:36.669 [job0] 00:32:36.669 filename=/dev/nvme0n1 00:32:36.669 [job1] 00:32:36.669 filename=/dev/nvme0n2 00:32:36.669 [job2] 00:32:36.669 filename=/dev/nvme0n3 00:32:36.669 [job3] 00:32:36.669 filename=/dev/nvme0n4 00:32:36.669 Could not set queue depth (nvme0n1) 00:32:36.669 Could 
not set queue depth (nvme0n2) 00:32:36.669 Could not set queue depth (nvme0n3) 00:32:36.669 Could not set queue depth (nvme0n4) 00:32:36.669 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.669 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.669 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.669 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:36.669 fio-3.35 00:32:36.669 Starting 4 threads 00:32:38.045 00:32:38.045 job0: (groupid=0, jobs=1): err= 0: pid=2604983: Tue Dec 10 23:03:45 2024 00:32:38.045 read: IOPS=46, BW=187KiB/s (191kB/s)(188KiB/1006msec) 00:32:38.045 slat (nsec): min=8661, max=27668, avg=15584.00, stdev=6285.96 00:32:38.045 clat (usec): min=217, max=41395, avg=19318.90, stdev=20516.55 00:32:38.045 lat (usec): min=230, max=41406, avg=19334.48, stdev=20514.79 00:32:38.045 clat percentiles (usec): 00:32:38.045 | 1.00th=[ 219], 5.00th=[ 229], 10.00th=[ 231], 20.00th=[ 249], 00:32:38.045 | 30.00th=[ 260], 40.00th=[ 285], 50.00th=[ 330], 60.00th=[40633], 00:32:38.045 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:32:38.045 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:38.045 | 99.99th=[41157] 00:32:38.045 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:32:38.045 slat (nsec): min=10560, max=39304, avg=12095.84, stdev=1931.84 00:32:38.045 clat (usec): min=127, max=301, avg=173.15, stdev=17.55 00:32:38.045 lat (usec): min=141, max=340, avg=185.24, stdev=17.96 00:32:38.045 clat percentiles (usec): 00:32:38.045 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 155], 20.00th=[ 161], 00:32:38.045 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:32:38.045 | 70.00th=[ 178], 80.00th=[ 184], 90.00th=[ 
196], 95.00th=[ 204], 00:32:38.045 | 99.00th=[ 233], 99.50th=[ 245], 99.90th=[ 302], 99.95th=[ 302], 00:32:38.045 | 99.99th=[ 302] 00:32:38.045 bw ( KiB/s): min= 4096, max= 4096, per=14.35%, avg=4096.00, stdev= 0.00, samples=1 00:32:38.045 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:32:38.045 lat (usec) : 250=93.38%, 500=2.50%, 750=0.18% 00:32:38.045 lat (msec) : 50=3.94% 00:32:38.045 cpu : usr=0.60%, sys=0.80%, ctx=563, majf=0, minf=1 00:32:38.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.045 issued rwts: total=47,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:38.045 job1: (groupid=0, jobs=1): err= 0: pid=2604984: Tue Dec 10 23:03:45 2024 00:32:38.045 read: IOPS=2269, BW=9079KiB/s (9297kB/s)(9088KiB/1001msec) 00:32:38.045 slat (nsec): min=6638, max=27249, avg=8751.99, stdev=1375.32 00:32:38.045 clat (usec): min=190, max=790, avg=230.39, stdev=30.04 00:32:38.045 lat (usec): min=198, max=798, avg=239.15, stdev=29.69 00:32:38.045 clat percentiles (usec): 00:32:38.045 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 208], 00:32:38.045 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 225], 60.00th=[ 239], 00:32:38.045 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 273], 00:32:38.045 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 619], 99.95th=[ 734], 00:32:38.045 | 99.99th=[ 791] 00:32:38.045 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:32:38.045 slat (nsec): min=9047, max=53265, avg=11825.71, stdev=2069.87 00:32:38.045 clat (usec): min=122, max=461, avg=160.15, stdev=23.19 00:32:38.045 lat (usec): min=132, max=470, avg=171.97, stdev=23.03 00:32:38.045 clat percentiles (usec): 00:32:38.045 | 1.00th=[ 133], 5.00th=[ 141], 
10.00th=[ 143], 20.00th=[ 147], 00:32:38.045 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:32:38.045 | 70.00th=[ 161], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 229], 00:32:38.045 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 255], 99.95th=[ 371], 00:32:38.045 | 99.99th=[ 461] 00:32:38.045 bw ( KiB/s): min=10672, max=10672, per=37.38%, avg=10672.00, stdev= 0.00, samples=1 00:32:38.045 iops : min= 2668, max= 2668, avg=2668.00, stdev= 0.00, samples=1 00:32:38.045 lat (usec) : 250=91.47%, 500=8.44%, 750=0.06%, 1000=0.02% 00:32:38.045 cpu : usr=2.70%, sys=5.30%, ctx=4835, majf=0, minf=1 00:32:38.045 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.045 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.045 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.045 issued rwts: total=2272,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.045 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:38.045 job2: (groupid=0, jobs=1): err= 0: pid=2604985: Tue Dec 10 23:03:45 2024 00:32:38.045 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:32:38.045 slat (nsec): min=7655, max=42036, avg=9539.72, stdev=1607.83 00:32:38.045 clat (usec): min=200, max=2177, avg=251.93, stdev=66.27 00:32:38.045 lat (usec): min=209, max=2186, avg=261.47, stdev=66.16 00:32:38.045 clat percentiles (usec): 00:32:38.045 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 223], 00:32:38.045 | 30.00th=[ 227], 40.00th=[ 233], 50.00th=[ 239], 60.00th=[ 245], 00:32:38.045 | 70.00th=[ 258], 80.00th=[ 289], 90.00th=[ 293], 95.00th=[ 302], 00:32:38.045 | 99.00th=[ 408], 99.50th=[ 457], 99.90th=[ 979], 99.95th=[ 1532], 00:32:38.045 | 99.99th=[ 2180] 00:32:38.045 write: IOPS=2058, BW=8236KiB/s (8433kB/s)(8244KiB/1001msec); 0 zone resets 00:32:38.045 slat (usec): min=10, max=20437, avg=22.89, stdev=449.90 00:32:38.045 clat (usec): min=139, max=347, avg=195.24, stdev=35.76 
00:32:38.045 lat (usec): min=152, max=20668, avg=218.14, stdev=452.09 00:32:38.045 clat percentiles (usec): 00:32:38.045 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:32:38.045 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 186], 00:32:38.045 | 70.00th=[ 215], 80.00th=[ 239], 90.00th=[ 243], 95.00th=[ 269], 00:32:38.045 | 99.00th=[ 285], 99.50th=[ 285], 99.90th=[ 302], 99.95th=[ 302], 00:32:38.045 | 99.99th=[ 347] 00:32:38.045 bw ( KiB/s): min= 8336, max= 8336, per=29.20%, avg=8336.00, stdev= 0.00, samples=1 00:32:38.045 iops : min= 2084, max= 2084, avg=2084.00, stdev= 0.00, samples=1 00:32:38.046 lat (usec) : 250=79.56%, 500=20.25%, 750=0.12%, 1000=0.02% 00:32:38.046 lat (msec) : 2=0.02%, 4=0.02% 00:32:38.046 cpu : usr=2.90%, sys=7.80%, ctx=4112, majf=0, minf=1 00:32:38.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.046 issued rwts: total=2048,2061,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:38.046 job3: (groupid=0, jobs=1): err= 0: pid=2604986: Tue Dec 10 23:03:45 2024 00:32:38.046 read: IOPS=2000, BW=8004KiB/s (8196kB/s)(8012KiB/1001msec) 00:32:38.046 slat (nsec): min=7311, max=24636, avg=9254.56, stdev=1439.42 00:32:38.046 clat (usec): min=214, max=500, avg=265.05, stdev=23.55 00:32:38.046 lat (usec): min=223, max=508, avg=274.30, stdev=23.98 00:32:38.046 clat percentiles (usec): 00:32:38.046 | 1.00th=[ 231], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 245], 00:32:38.046 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:32:38.046 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 297], 00:32:38.046 | 99.00th=[ 322], 99.50th=[ 351], 99.90th=[ 486], 99.95th=[ 494], 00:32:38.046 | 99.99th=[ 502] 00:32:38.046 write: IOPS=2045, 
BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:32:38.046 slat (nsec): min=10617, max=37356, avg=12891.80, stdev=2156.54 00:32:38.046 clat (usec): min=155, max=319, avg=200.00, stdev=17.95 00:32:38.046 lat (usec): min=167, max=334, avg=212.89, stdev=18.64 00:32:38.046 clat percentiles (usec): 00:32:38.046 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 184], 00:32:38.046 | 30.00th=[ 190], 40.00th=[ 196], 50.00th=[ 200], 60.00th=[ 204], 00:32:38.046 | 70.00th=[ 208], 80.00th=[ 215], 90.00th=[ 223], 95.00th=[ 231], 00:32:38.046 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 281], 99.95th=[ 285], 00:32:38.046 | 99.99th=[ 322] 00:32:38.046 bw ( KiB/s): min= 8192, max= 8192, per=28.69%, avg=8192.00, stdev= 0.00, samples=1 00:32:38.046 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:32:38.046 lat (usec) : 250=64.95%, 500=35.03%, 750=0.02% 00:32:38.046 cpu : usr=3.50%, sys=6.90%, ctx=4052, majf=0, minf=1 00:32:38.046 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:38.046 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.046 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:38.046 issued rwts: total=2003,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:38.046 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:38.046 00:32:38.046 Run status group 0 (all jobs): 00:32:38.046 READ: bw=24.7MiB/s (25.9MB/s), 187KiB/s-9079KiB/s (191kB/s-9297kB/s), io=24.9MiB (26.1MB), run=1001-1006msec 00:32:38.046 WRITE: bw=27.9MiB/s (29.2MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=28.1MiB (29.4MB), run=1001-1006msec 00:32:38.046 00:32:38.046 Disk stats (read/write): 00:32:38.046 nvme0n1: ios=95/512, merge=0/0, ticks=1907/78, in_queue=1985, util=98.00% 00:32:38.046 nvme0n2: ios=2052/2048, merge=0/0, ticks=1440/326, in_queue=1766, util=98.27% 00:32:38.046 nvme0n3: ios=1569/2012, merge=0/0, ticks=1328/382, in_queue=1710, util=98.34% 00:32:38.046 
nvme0n4: ios=1560/1942, merge=0/0, ticks=1342/366, in_queue=1708, util=98.11% 00:32:38.046 23:03:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:32:38.046 [global] 00:32:38.046 thread=1 00:32:38.046 invalidate=1 00:32:38.046 rw=write 00:32:38.046 time_based=1 00:32:38.046 runtime=1 00:32:38.046 ioengine=libaio 00:32:38.046 direct=1 00:32:38.046 bs=4096 00:32:38.046 iodepth=128 00:32:38.046 norandommap=0 00:32:38.046 numjobs=1 00:32:38.046 00:32:38.046 verify_dump=1 00:32:38.046 verify_backlog=512 00:32:38.046 verify_state_save=0 00:32:38.046 do_verify=1 00:32:38.046 verify=crc32c-intel 00:32:38.046 [job0] 00:32:38.046 filename=/dev/nvme0n1 00:32:38.046 [job1] 00:32:38.046 filename=/dev/nvme0n2 00:32:38.046 [job2] 00:32:38.046 filename=/dev/nvme0n3 00:32:38.046 [job3] 00:32:38.046 filename=/dev/nvme0n4 00:32:38.046 Could not set queue depth (nvme0n1) 00:32:38.046 Could not set queue depth (nvme0n2) 00:32:38.046 Could not set queue depth (nvme0n3) 00:32:38.046 Could not set queue depth (nvme0n4) 00:32:38.304 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:38.304 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:38.304 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:38.304 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:38.304 fio-3.35 00:32:38.304 Starting 4 threads 00:32:39.713 00:32:39.713 job0: (groupid=0, jobs=1): err= 0: pid=2605353: Tue Dec 10 23:03:47 2024 00:32:39.713 read: IOPS=4949, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1006msec) 00:32:39.713 slat (nsec): min=1331, max=14552k, avg=106636.56, stdev=812326.26 00:32:39.713 clat (usec): min=2767, max=70603, 
avg=13173.33, stdev=7255.75 00:32:39.713 lat (usec): min=2780, max=70611, avg=13279.96, stdev=7323.35 00:32:39.713 clat percentiles (usec): 00:32:39.713 | 1.00th=[ 4621], 5.00th=[ 7177], 10.00th=[ 8586], 20.00th=[ 8979], 00:32:39.713 | 30.00th=[ 9634], 40.00th=[10814], 50.00th=[11338], 60.00th=[12649], 00:32:39.713 | 70.00th=[13698], 80.00th=[15926], 90.00th=[19268], 95.00th=[22676], 00:32:39.713 | 99.00th=[52167], 99.50th=[67634], 99.90th=[70779], 99.95th=[70779], 00:32:39.713 | 99.99th=[70779] 00:32:39.713 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:32:39.713 slat (usec): min=2, max=23335, avg=78.47, stdev=654.20 00:32:39.713 clat (usec): min=288, max=70576, avg=11526.64, stdev=7918.47 00:32:39.713 lat (usec): min=317, max=70581, avg=11605.11, stdev=7943.66 00:32:39.713 clat percentiles (usec): 00:32:39.713 | 1.00th=[ 1467], 5.00th=[ 3228], 10.00th=[ 5080], 20.00th=[ 7898], 00:32:39.713 | 30.00th=[ 9110], 40.00th=[ 9634], 50.00th=[10290], 60.00th=[10552], 00:32:39.713 | 70.00th=[12256], 80.00th=[14484], 90.00th=[16188], 95.00th=[17695], 00:32:39.713 | 99.00th=[57410], 99.50th=[61604], 99.90th=[68682], 99.95th=[68682], 00:32:39.713 | 99.99th=[70779] 00:32:39.713 bw ( KiB/s): min=16384, max=24576, per=31.45%, avg=20480.00, stdev=5792.62, samples=2 00:32:39.713 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:32:39.713 lat (usec) : 500=0.02%, 750=0.17% 00:32:39.713 lat (msec) : 2=0.46%, 4=3.01%, 10=34.93%, 20=54.56%, 50=5.56% 00:32:39.713 lat (msec) : 100=1.29% 00:32:39.713 cpu : usr=4.68%, sys=5.37%, ctx=375, majf=0, minf=1 00:32:39.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:32:39.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:39.713 issued rwts: total=4979,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.713 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:32:39.713 job1: (groupid=0, jobs=1): err= 0: pid=2605354: Tue Dec 10 23:03:47 2024 00:32:39.713 read: IOPS=4819, BW=18.8MiB/s (19.7MB/s)(18.9MiB/1003msec) 00:32:39.713 slat (nsec): min=1347, max=13040k, avg=97965.32, stdev=792809.48 00:32:39.713 clat (usec): min=596, max=29078, avg=12852.32, stdev=3921.44 00:32:39.713 lat (usec): min=629, max=29122, avg=12950.29, stdev=3984.28 00:32:39.713 clat percentiles (usec): 00:32:39.713 | 1.00th=[ 2638], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:32:39.713 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11731], 60.00th=[13042], 00:32:39.713 | 70.00th=[15008], 80.00th=[16188], 90.00th=[17695], 95.00th=[20579], 00:32:39.713 | 99.00th=[23462], 99.50th=[23462], 99.90th=[27132], 99.95th=[27132], 00:32:39.713 | 99.99th=[28967] 00:32:39.713 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:32:39.713 slat (usec): min=2, max=13891, avg=94.74, stdev=708.97 00:32:39.713 clat (usec): min=2765, max=67043, avg=12702.05, stdev=8403.57 00:32:39.713 lat (usec): min=2779, max=67060, avg=12796.79, stdev=8463.55 00:32:39.713 clat percentiles (usec): 00:32:39.713 | 1.00th=[ 3752], 5.00th=[ 6325], 10.00th=[ 6980], 20.00th=[ 8291], 00:32:39.713 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[11338], 00:32:39.713 | 70.00th=[13173], 80.00th=[15401], 90.00th=[16319], 95.00th=[23987], 00:32:39.713 | 99.00th=[58983], 99.50th=[61604], 99.90th=[65274], 99.95th=[65274], 00:32:39.713 | 99.99th=[66847] 00:32:39.713 bw ( KiB/s): min=16384, max=24576, per=31.45%, avg=20480.00, stdev=5792.62, samples=2 00:32:39.713 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:32:39.713 lat (usec) : 750=0.21%, 1000=0.05% 00:32:39.713 lat (msec) : 2=0.03%, 4=0.85%, 10=34.32%, 20=57.55%, 50=5.95% 00:32:39.713 lat (msec) : 100=1.03% 00:32:39.713 cpu : usr=4.09%, sys=7.09%, ctx=340, majf=0, minf=2 00:32:39.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, 
>=64=99.4% 00:32:39.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:39.713 issued rwts: total=4834,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:39.713 job2: (groupid=0, jobs=1): err= 0: pid=2605355: Tue Dec 10 23:03:47 2024 00:32:39.713 read: IOPS=4284, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1006msec) 00:32:39.713 slat (nsec): min=1339, max=12401k, avg=108060.14, stdev=834812.55 00:32:39.713 clat (usec): min=3859, max=41343, avg=14007.40, stdev=4905.52 00:32:39.713 lat (usec): min=5345, max=41350, avg=14115.46, stdev=4968.31 00:32:39.713 clat percentiles (usec): 00:32:39.713 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10814], 00:32:39.713 | 30.00th=[11863], 40.00th=[12256], 50.00th=[13042], 60.00th=[13829], 00:32:39.713 | 70.00th=[14877], 80.00th=[15664], 90.00th=[18220], 95.00th=[24249], 00:32:39.713 | 99.00th=[32113], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:32:39.713 | 99.99th=[41157] 00:32:39.713 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:32:39.713 slat (usec): min=2, max=13730, avg=108.98, stdev=753.21 00:32:39.713 clat (usec): min=3980, max=56841, avg=14569.20, stdev=6864.72 00:32:39.713 lat (usec): min=3993, max=56845, avg=14678.18, stdev=6920.84 00:32:39.713 clat percentiles (usec): 00:32:39.713 | 1.00th=[ 5473], 5.00th=[ 7373], 10.00th=[ 8848], 20.00th=[ 9372], 00:32:39.713 | 30.00th=[10290], 40.00th=[11076], 50.00th=[12387], 60.00th=[13698], 00:32:39.713 | 70.00th=[14746], 80.00th=[18482], 90.00th=[27919], 95.00th=[28181], 00:32:39.713 | 99.00th=[34341], 99.50th=[37487], 99.90th=[45351], 99.95th=[45351], 00:32:39.713 | 99.99th=[56886] 00:32:39.713 bw ( KiB/s): min=16384, max=20480, per=28.31%, avg=18432.00, stdev=2896.31, samples=2 00:32:39.713 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 
00:32:39.713 lat (msec) : 4=0.08%, 10=20.65%, 20=65.52%, 50=13.74%, 100=0.01% 00:32:39.713 cpu : usr=4.08%, sys=5.17%, ctx=302, majf=0, minf=1 00:32:39.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:32:39.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:39.713 issued rwts: total=4310,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:39.713 job3: (groupid=0, jobs=1): err= 0: pid=2605356: Tue Dec 10 23:03:47 2024 00:32:39.713 read: IOPS=1621, BW=6486KiB/s (6641kB/s)(6732KiB/1038msec) 00:32:39.713 slat (usec): min=3, max=63889, avg=327.53, stdev=2838.21 00:32:39.713 clat (usec): min=347, max=117696, avg=38539.97, stdev=29445.13 00:32:39.713 lat (msec): min=13, max=129, avg=38.87, stdev=29.60 00:32:39.713 clat percentiles (msec): 00:32:39.713 | 1.00th=[ 12], 5.00th=[ 14], 10.00th=[ 14], 20.00th=[ 15], 00:32:39.713 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 25], 60.00th=[ 43], 00:32:39.713 | 70.00th=[ 53], 80.00th=[ 66], 90.00th=[ 75], 95.00th=[ 113], 00:32:39.713 | 99.00th=[ 118], 99.50th=[ 118], 99.90th=[ 118], 99.95th=[ 118], 00:32:39.713 | 99.99th=[ 118] 00:32:39.713 write: IOPS=1973, BW=7892KiB/s (8082kB/s)(8192KiB/1038msec); 0 zone resets 00:32:39.713 slat (usec): min=7, max=25414, avg=229.12, stdev=1601.63 00:32:39.713 clat (msec): min=9, max=129, avg=31.21, stdev=24.84 00:32:39.713 lat (msec): min=11, max=129, avg=31.44, stdev=24.92 00:32:39.713 clat percentiles (msec): 00:32:39.713 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 14], 20.00th=[ 14], 00:32:39.713 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 16], 60.00th=[ 18], 00:32:39.713 | 70.00th=[ 46], 80.00th=[ 57], 90.00th=[ 67], 95.00th=[ 72], 00:32:39.713 | 99.00th=[ 130], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 130], 00:32:39.713 | 99.99th=[ 130] 00:32:39.713 bw ( KiB/s): min= 4096, max=12288, 
per=12.58%, avg=8192.00, stdev=5792.62, samples=2 00:32:39.713 iops : min= 1024, max= 3072, avg=2048.00, stdev=1448.15, samples=2 00:32:39.713 lat (usec) : 500=0.03% 00:32:39.713 lat (msec) : 10=0.13%, 20=55.35%, 50=14.29%, 100=26.83%, 250=3.38% 00:32:39.713 cpu : usr=2.03%, sys=3.38%, ctx=120, majf=0, minf=1 00:32:39.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:32:39.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:39.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:39.713 issued rwts: total=1683,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:39.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:39.713 00:32:39.713 Run status group 0 (all jobs): 00:32:39.713 READ: bw=59.5MiB/s (62.4MB/s), 6486KiB/s-19.3MiB/s (6641kB/s-20.3MB/s), io=61.7MiB (64.7MB), run=1003-1038msec 00:32:39.713 WRITE: bw=63.6MiB/s (66.7MB/s), 7892KiB/s-19.9MiB/s (8082kB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1003-1038msec 00:32:39.713 00:32:39.713 Disk stats (read/write): 00:32:39.713 nvme0n1: ios=4558/4608, merge=0/0, ticks=45968/40265, in_queue=86233, util=98.30% 00:32:39.713 nvme0n2: ios=4147/4191, merge=0/0, ticks=53255/48117, in_queue=101372, util=98.48% 00:32:39.713 nvme0n3: ios=3643/4095, merge=0/0, ticks=48361/56753, in_queue=105114, util=98.44% 00:32:39.713 nvme0n4: ios=1203/1536, merge=0/0, ticks=13514/13277, in_queue=26791, util=98.43% 00:32:39.713 23:03:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:32:39.713 [global] 00:32:39.713 thread=1 00:32:39.713 invalidate=1 00:32:39.713 rw=randwrite 00:32:39.713 time_based=1 00:32:39.713 runtime=1 00:32:39.713 ioengine=libaio 00:32:39.713 direct=1 00:32:39.713 bs=4096 00:32:39.713 iodepth=128 00:32:39.713 norandommap=0 00:32:39.713 numjobs=1 00:32:39.713 00:32:39.713 
verify_dump=1 00:32:39.713 verify_backlog=512 00:32:39.713 verify_state_save=0 00:32:39.713 do_verify=1 00:32:39.713 verify=crc32c-intel 00:32:39.713 [job0] 00:32:39.713 filename=/dev/nvme0n1 00:32:39.713 [job1] 00:32:39.713 filename=/dev/nvme0n2 00:32:39.713 [job2] 00:32:39.713 filename=/dev/nvme0n3 00:32:39.713 [job3] 00:32:39.713 filename=/dev/nvme0n4 00:32:39.713 Could not set queue depth (nvme0n1) 00:32:39.713 Could not set queue depth (nvme0n2) 00:32:39.713 Could not set queue depth (nvme0n3) 00:32:39.713 Could not set queue depth (nvme0n4) 00:32:39.977 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.977 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.977 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.977 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:32:39.977 fio-3.35 00:32:39.977 Starting 4 threads 00:32:41.364 00:32:41.364 job0: (groupid=0, jobs=1): err= 0: pid=2605727: Tue Dec 10 23:03:48 2024 00:32:41.364 read: IOPS=4562, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1010msec) 00:32:41.364 slat (nsec): min=1487, max=16855k, avg=99350.22, stdev=814055.93 00:32:41.364 clat (usec): min=4045, max=40621, avg=13603.39, stdev=5908.51 00:32:41.364 lat (usec): min=4052, max=49448, avg=13702.74, stdev=5962.04 00:32:41.364 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 4293], 5.00th=[ 6587], 10.00th=[ 8356], 20.00th=[ 9503], 00:32:41.365 | 30.00th=[10159], 40.00th=[11207], 50.00th=[12518], 60.00th=[13698], 00:32:41.365 | 70.00th=[14484], 80.00th=[16319], 90.00th=[20055], 95.00th=[23987], 00:32:41.365 | 99.00th=[34341], 99.50th=[35390], 99.90th=[40633], 99.95th=[40633], 00:32:41.365 | 99.99th=[40633] 00:32:41.365 write: IOPS=4853, BW=19.0MiB/s (19.9MB/s)(19.1MiB/1010msec); 0 zone resets 00:32:41.365 
slat (usec): min=2, max=25735, avg=82.81, stdev=714.46 00:32:41.365 clat (usec): min=1497, max=52117, avg=13362.87, stdev=8710.88 00:32:41.365 lat (usec): min=1605, max=52140, avg=13445.68, stdev=8758.06 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 3752], 5.00th=[ 5211], 10.00th=[ 6063], 20.00th=[ 7504], 00:32:41.365 | 30.00th=[ 8029], 40.00th=[ 9372], 50.00th=[10945], 60.00th=[12256], 00:32:41.365 | 70.00th=[14353], 80.00th=[16909], 90.00th=[25035], 95.00th=[30802], 00:32:41.365 | 99.00th=[46924], 99.50th=[48497], 99.90th=[52167], 99.95th=[52167], 00:32:41.365 | 99.99th=[52167] 00:32:41.365 bw ( KiB/s): min=17720, max=20439, per=30.66%, avg=19079.50, stdev=1922.62, samples=2 00:32:41.365 iops : min= 4430, max= 5109, avg=4769.50, stdev=480.13, samples=2 00:32:41.365 lat (msec) : 2=0.01%, 4=0.67%, 10=34.61%, 20=52.59%, 50=11.98% 00:32:41.365 lat (msec) : 100=0.15% 00:32:41.365 cpu : usr=3.47%, sys=6.34%, ctx=292, majf=0, minf=1 00:32:41.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:41.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.365 issued rwts: total=4608,4902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.365 job1: (groupid=0, jobs=1): err= 0: pid=2605728: Tue Dec 10 23:03:48 2024 00:32:41.365 read: IOPS=3212, BW=12.5MiB/s (13.2MB/s)(13.2MiB/1051msec) 00:32:41.365 slat (nsec): min=1033, max=21836k, avg=158811.80, stdev=1221348.65 00:32:41.365 clat (usec): min=1102, max=84939, avg=20710.62, stdev=18582.80 00:32:41.365 lat (usec): min=1104, max=84943, avg=20869.43, stdev=18706.21 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 2900], 5.00th=[ 4359], 10.00th=[ 6980], 20.00th=[ 8717], 00:32:41.365 | 30.00th=[ 9241], 40.00th=[10683], 50.00th=[11994], 60.00th=[14091], 00:32:41.365 | 70.00th=[20055], 
80.00th=[34341], 90.00th=[50594], 95.00th=[60556], 00:32:41.365 | 99.00th=[80217], 99.50th=[82314], 99.90th=[82314], 99.95th=[85459], 00:32:41.365 | 99.99th=[85459] 00:32:41.365 write: IOPS=3410, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1051msec); 0 zone resets 00:32:41.365 slat (nsec): min=1849, max=19246k, avg=110197.88, stdev=921530.97 00:32:41.365 clat (usec): min=987, max=80693, avg=17621.84, stdev=14493.33 00:32:41.365 lat (usec): min=1051, max=80702, avg=17732.04, stdev=14567.09 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 1942], 5.00th=[ 5014], 10.00th=[ 7046], 20.00th=[ 7767], 00:32:41.365 | 30.00th=[ 8455], 40.00th=[10159], 50.00th=[11338], 60.00th=[12387], 00:32:41.365 | 70.00th=[15139], 80.00th=[31851], 90.00th=[42730], 95.00th=[46924], 00:32:41.365 | 99.00th=[58983], 99.50th=[67634], 99.90th=[76022], 99.95th=[76022], 00:32:41.365 | 99.99th=[80217] 00:32:41.365 bw ( KiB/s): min=12304, max=16368, per=23.04%, avg=14336.00, stdev=2873.68, samples=2 00:32:41.365 iops : min= 3076, max= 4092, avg=3584.00, stdev=718.42, samples=2 00:32:41.365 lat (usec) : 1000=0.01% 00:32:41.365 lat (msec) : 2=0.93%, 4=2.61%, 10=35.16%, 20=33.36%, 50=20.65% 00:32:41.365 lat (msec) : 100=7.27% 00:32:41.365 cpu : usr=1.90%, sys=3.62%, ctx=298, majf=0, minf=1 00:32:41.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:32:41.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.365 issued rwts: total=3376,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.365 job2: (groupid=0, jobs=1): err= 0: pid=2605729: Tue Dec 10 23:03:48 2024 00:32:41.365 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:32:41.365 slat (nsec): min=1457, max=27651k, avg=103735.62, stdev=947069.36 00:32:41.365 clat (usec): min=5580, max=46293, avg=13612.00, stdev=4538.39 
00:32:41.365 lat (usec): min=5592, max=47061, avg=13715.74, stdev=4633.55 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 6587], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:32:41.365 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[12387], 00:32:41.365 | 70.00th=[15139], 80.00th=[17695], 90.00th=[19530], 95.00th=[20317], 00:32:41.365 | 99.00th=[31065], 99.50th=[31065], 99.90th=[31851], 99.95th=[34866], 00:32:41.365 | 99.99th=[46400] 00:32:41.365 write: IOPS=4749, BW=18.6MiB/s (19.5MB/s)(18.7MiB/1009msec); 0 zone resets 00:32:41.365 slat (usec): min=2, max=17991, avg=102.50, stdev=807.63 00:32:41.365 clat (usec): min=2752, max=35277, avg=13619.96, stdev=5685.65 00:32:41.365 lat (usec): min=2760, max=35308, avg=13722.46, stdev=5748.49 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 5932], 5.00th=[ 6587], 10.00th=[ 7570], 20.00th=[ 9503], 00:32:41.365 | 30.00th=[10290], 40.00th=[11207], 50.00th=[11731], 60.00th=[13042], 00:32:41.365 | 70.00th=[14877], 80.00th=[19006], 90.00th=[20841], 95.00th=[25297], 00:32:41.365 | 99.00th=[31851], 99.50th=[32637], 99.90th=[32637], 99.95th=[32637], 00:32:41.365 | 99.99th=[35390] 00:32:41.365 bw ( KiB/s): min=18368, max=19113, per=30.12%, avg=18740.50, stdev=526.79, samples=2 00:32:41.365 iops : min= 4592, max= 4778, avg=4685.00, stdev=131.52, samples=2 00:32:41.365 lat (msec) : 4=0.19%, 10=17.00%, 20=72.94%, 50=9.87% 00:32:41.365 cpu : usr=4.86%, sys=5.26%, ctx=294, majf=0, minf=1 00:32:41.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:32:41.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.365 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.365 issued rwts: total=4608,4792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.365 job3: (groupid=0, jobs=1): err= 0: pid=2605730: Tue Dec 10 23:03:48 2024 00:32:41.365 read: 
IOPS=2739, BW=10.7MiB/s (11.2MB/s)(10.7MiB/1004msec) 00:32:41.365 slat (nsec): min=1566, max=27617k, avg=187580.34, stdev=1435951.39 00:32:41.365 clat (usec): min=1676, max=58985, avg=24296.45, stdev=10808.58 00:32:41.365 lat (usec): min=3346, max=59011, avg=24484.03, stdev=10906.19 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 6390], 5.00th=[10028], 10.00th=[10683], 20.00th=[12780], 00:32:41.365 | 30.00th=[15533], 40.00th=[19792], 50.00th=[25297], 60.00th=[27919], 00:32:41.365 | 70.00th=[30278], 80.00th=[32113], 90.00th=[39060], 95.00th=[43254], 00:32:41.365 | 99.00th=[50594], 99.50th=[50594], 99.90th=[56361], 99.95th=[56361], 00:32:41.365 | 99.99th=[58983] 00:32:41.365 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:32:41.365 slat (usec): min=2, max=32400, avg=136.44, stdev=1146.84 00:32:41.365 clat (usec): min=436, max=76871, avg=19594.84, stdev=11944.15 00:32:41.365 lat (usec): min=669, max=76903, avg=19731.27, stdev=12046.12 00:32:41.365 clat percentiles (usec): 00:32:41.365 | 1.00th=[ 2040], 5.00th=[ 5014], 10.00th=[ 6783], 20.00th=[10421], 00:32:41.365 | 30.00th=[10945], 40.00th=[15664], 50.00th=[17171], 60.00th=[18220], 00:32:41.365 | 70.00th=[23200], 80.00th=[29492], 90.00th=[34866], 95.00th=[42206], 00:32:41.365 | 99.00th=[58459], 99.50th=[58983], 99.90th=[58983], 99.95th=[63701], 00:32:41.365 | 99.99th=[77071] 00:32:41.365 bw ( KiB/s): min=12288, max=12288, per=19.75%, avg=12288.00, stdev= 0.00, samples=2 00:32:41.365 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:32:41.365 lat (usec) : 500=0.02%, 750=0.09%, 1000=0.07% 00:32:41.365 lat (msec) : 2=0.31%, 4=1.65%, 10=8.36%, 20=42.32%, 50=44.47% 00:32:41.365 lat (msec) : 100=2.71% 00:32:41.365 cpu : usr=1.99%, sys=5.18%, ctx=239, majf=0, minf=1 00:32:41.365 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:32:41.365 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:41.365 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:41.365 issued rwts: total=2750,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:41.365 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:41.365 00:32:41.365 Run status group 0 (all jobs): 00:32:41.365 READ: bw=57.0MiB/s (59.8MB/s), 10.7MiB/s-17.8MiB/s (11.2MB/s-18.7MB/s), io=59.9MiB (62.8MB), run=1004-1051msec 00:32:41.365 WRITE: bw=60.8MiB/s (63.7MB/s), 12.0MiB/s-19.0MiB/s (12.5MB/s-19.9MB/s), io=63.9MiB (67.0MB), run=1004-1051msec 00:32:41.365 00:32:41.365 Disk stats (read/write): 00:32:41.365 nvme0n1: ios=3754/4096, merge=0/0, ticks=49466/49969, in_queue=99435, util=82.26% 00:32:41.365 nvme0n2: ios=2599/3072, merge=0/0, ticks=32711/32595, in_queue=65306, util=83.25% 00:32:41.365 nvme0n3: ios=3640/3662, merge=0/0, ticks=48772/49357, in_queue=98129, util=97.30% 00:32:41.365 nvme0n4: ios=2064/2446, merge=0/0, ticks=33194/31795, in_queue=64989, util=98.24% 00:32:41.365 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:32:41.365 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2605958 00:32:41.365 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:32:41.365 23:03:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:32:41.365 [global] 00:32:41.365 thread=1 00:32:41.365 invalidate=1 00:32:41.365 rw=read 00:32:41.365 time_based=1 00:32:41.365 runtime=10 00:32:41.365 ioengine=libaio 00:32:41.365 direct=1 00:32:41.365 bs=4096 00:32:41.365 iodepth=1 00:32:41.365 norandommap=1 00:32:41.365 numjobs=1 00:32:41.365 00:32:41.365 [job0] 00:32:41.365 filename=/dev/nvme0n1 00:32:41.365 [job1] 00:32:41.365 filename=/dev/nvme0n2 00:32:41.365 [job2] 00:32:41.365 filename=/dev/nvme0n3 00:32:41.365 [job3] 00:32:41.365 
filename=/dev/nvme0n4 00:32:41.365 Could not set queue depth (nvme0n1) 00:32:41.365 Could not set queue depth (nvme0n2) 00:32:41.365 Could not set queue depth (nvme0n3) 00:32:41.365 Could not set queue depth (nvme0n4) 00:32:41.623 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.623 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.623 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.623 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:32:41.623 fio-3.35 00:32:41.623 Starting 4 threads 00:32:44.153 23:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete concat0 00:32:44.410 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41418752, buflen=4096 00:32:44.410 fio: pid=2606104, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:44.410 23:03:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_raid_delete raid0 00:32:44.667 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=4505600, buflen=4096 00:32:44.667 fio: pid=2606103, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:44.667 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:44.667 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:32:44.926 fio: io_u error on file /dev/nvme0n1: Operation not 
supported: read offset=49913856, buflen=4096 00:32:44.926 fio: pid=2606101, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:44.926 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:44.926 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:32:44.926 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:44.926 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:32:44.926 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9084928, buflen=4096 00:32:44.926 fio: pid=2606102, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:32:45.185 00:32:45.185 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2606101: Tue Dec 10 23:03:52 2024 00:32:45.185 read: IOPS=3860, BW=15.1MiB/s (15.8MB/s)(47.6MiB/3157msec) 00:32:45.185 slat (usec): min=7, max=22547, avg=11.46, stdev=211.78 00:32:45.185 clat (usec): min=162, max=3984, avg=242.99, stdev=56.40 00:32:45.185 lat (usec): min=183, max=22969, avg=254.46, stdev=221.11 00:32:45.185 clat percentiles (usec): 00:32:45.185 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 215], 20.00th=[ 227], 00:32:45.185 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 249], 00:32:45.185 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 273], 00:32:45.185 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 469], 99.95th=[ 506], 00:32:45.185 | 99.99th=[ 2737] 00:32:45.185 bw ( KiB/s): min=15296, max=15816, per=51.33%, 
avg=15578.50, stdev=201.92, samples=6 00:32:45.185 iops : min= 3824, max= 3954, avg=3894.50, stdev=50.34, samples=6 00:32:45.185 lat (usec) : 250=67.17%, 500=32.76%, 750=0.02% 00:32:45.185 lat (msec) : 2=0.02%, 4=0.03% 00:32:45.185 cpu : usr=2.19%, sys=5.96%, ctx=12191, majf=0, minf=2 00:32:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 issued rwts: total=12187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:45.185 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2606102: Tue Dec 10 23:03:52 2024 00:32:45.185 read: IOPS=657, BW=2628KiB/s (2691kB/s)(8872KiB/3376msec) 00:32:45.185 slat (usec): min=6, max=16871, avg=19.93, stdev=394.54 00:32:45.185 clat (usec): min=190, max=42044, avg=1490.33, stdev=7001.99 00:32:45.185 lat (usec): min=197, max=58186, avg=1510.26, stdev=7077.08 00:32:45.185 clat percentiles (usec): 00:32:45.185 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 225], 00:32:45.185 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 255], 00:32:45.185 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 322], 95.00th=[ 338], 00:32:45.185 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:45.185 | 99.99th=[42206] 00:32:45.185 bw ( KiB/s): min= 93, max=15104, per=9.71%, avg=2946.17, stdev=6014.30, samples=6 00:32:45.185 iops : min= 23, max= 3776, avg=736.50, stdev=1503.60, samples=6 00:32:45.185 lat (usec) : 250=53.85%, 500=43.08% 00:32:45.185 lat (msec) : 50=3.02% 00:32:45.185 cpu : usr=0.50%, sys=0.92%, ctx=2223, majf=0, minf=2 00:32:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:32:45.185 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 issued rwts: total=2219,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:45.185 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2606103: Tue Dec 10 23:03:52 2024 00:32:45.185 read: IOPS=373, BW=1494KiB/s (1529kB/s)(4400KiB/2946msec) 00:32:45.185 slat (nsec): min=8074, max=34143, avg=9285.33, stdev=1993.44 00:32:45.185 clat (usec): min=213, max=41986, avg=2647.10, stdev=9621.49 00:32:45.185 lat (usec): min=222, max=41996, avg=2656.38, stdev=9622.66 00:32:45.185 clat percentiles (usec): 00:32:45.185 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229], 00:32:45.185 | 30.00th=[ 231], 40.00th=[ 233], 50.00th=[ 237], 60.00th=[ 239], 00:32:45.185 | 70.00th=[ 243], 80.00th=[ 247], 90.00th=[ 260], 95.00th=[41157], 00:32:45.185 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:32:45.185 | 99.99th=[42206] 00:32:45.185 bw ( KiB/s): min= 96, max= 8320, per=5.74%, avg=1742.40, stdev=3676.99, samples=5 00:32:45.185 iops : min= 24, max= 2080, avg=435.60, stdev=919.25, samples=5 00:32:45.185 lat (usec) : 250=82.47%, 500=11.53% 00:32:45.185 lat (msec) : 50=5.90% 00:32:45.185 cpu : usr=0.10%, sys=0.44%, ctx=1102, majf=0, minf=1 00:32:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 issued rwts: total=1101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:45.185 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2606104: Tue Dec 10 23:03:52 2024 00:32:45.185 read: IOPS=3719, BW=14.5MiB/s 
(15.2MB/s)(39.5MiB/2719msec) 00:32:45.185 slat (nsec): min=7078, max=41350, avg=8337.96, stdev=1306.65 00:32:45.185 clat (usec): min=191, max=1554, avg=257.74, stdev=31.46 00:32:45.185 lat (usec): min=199, max=1563, avg=266.08, stdev=31.51 00:32:45.185 clat percentiles (usec): 00:32:45.185 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 235], 00:32:45.185 | 30.00th=[ 239], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 258], 00:32:45.185 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 306], 00:32:45.185 | 99.00th=[ 334], 99.50th=[ 343], 99.90th=[ 424], 99.95th=[ 429], 00:32:45.185 | 99.99th=[ 523] 00:32:45.185 bw ( KiB/s): min=13288, max=15880, per=49.69%, avg=15080.00, stdev=1043.18, samples=5 00:32:45.185 iops : min= 3322, max= 3970, avg=3770.00, stdev=260.79, samples=5 00:32:45.185 lat (usec) : 250=52.28%, 500=47.69%, 750=0.01% 00:32:45.185 lat (msec) : 2=0.01% 00:32:45.185 cpu : usr=1.73%, sys=6.40%, ctx=10113, majf=0, minf=2 00:32:45.185 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:45.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:45.185 issued rwts: total=10113,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:45.185 latency : target=0, window=0, percentile=100.00%, depth=1 00:32:45.185 00:32:45.185 Run status group 0 (all jobs): 00:32:45.185 READ: bw=29.6MiB/s (31.1MB/s), 1494KiB/s-15.1MiB/s (1529kB/s-15.8MB/s), io=100MiB (105MB), run=2719-3376msec 00:32:45.185 00:32:45.185 Disk stats (read/write): 00:32:45.185 nvme0n1: ios=12120/0, merge=0/0, ticks=3700/0, in_queue=3700, util=98.58% 00:32:45.185 nvme0n2: ios=2250/0, merge=0/0, ticks=3979/0, in_queue=3979, util=99.02% 00:32:45.185 nvme0n3: ios=1098/0, merge=0/0, ticks=2815/0, in_queue=2815, util=96.52% 00:32:45.185 nvme0n4: ios=9768/0, merge=0/0, ticks=2391/0, in_queue=2391, util=96.45% 00:32:45.185 23:03:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:45.185 23:03:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:32:45.444 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:45.444 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:32:45.703 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:45.703 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:32:45.961 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:32:45.961 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:32:45.961 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:32:45.961 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2605958 00:32:45.961 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:32:45.961 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:46.219 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:32:46.219 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:32:46.220 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:32:46.220 nvmf hotplug test: fio failed as expected 00:32:46.220 23:03:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.478 rmmod nvme_tcp 00:32:46.478 rmmod nvme_fabrics 00:32:46.478 rmmod nvme_keyring 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2603297 ']' 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2603297 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2603297 ']' 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2603297 00:32:46.478 23:03:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2603297 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2603297' 00:32:46.478 killing process with pid 2603297 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2603297 00:32:46.478 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2603297 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:32:46.736 
23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.736 23:03:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:49.270 00:32:49.270 real 0m25.914s 00:32:49.270 user 1m30.413s 00:32:49.270 sys 0m11.718s 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:32:49.270 ************************************ 00:32:49.270 END TEST nvmf_fio_target 00:32:49.270 ************************************ 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:49.270 ************************************ 00:32:49.270 START TEST nvmf_bdevio 00:32:49.270 
************************************ 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:32:49.270 * Looking for test storage... 00:32:49.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:49.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.270 --rc genhtml_branch_coverage=1 00:32:49.270 --rc genhtml_function_coverage=1 00:32:49.270 --rc genhtml_legend=1 00:32:49.270 --rc geninfo_all_blocks=1 00:32:49.270 --rc geninfo_unexecuted_blocks=1 00:32:49.270 00:32:49.270 ' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:49.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.270 --rc genhtml_branch_coverage=1 00:32:49.270 --rc genhtml_function_coverage=1 00:32:49.270 --rc genhtml_legend=1 00:32:49.270 --rc geninfo_all_blocks=1 00:32:49.270 --rc geninfo_unexecuted_blocks=1 00:32:49.270 00:32:49.270 ' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:49.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:49.270 --rc genhtml_branch_coverage=1 00:32:49.270 --rc genhtml_function_coverage=1 00:32:49.270 --rc genhtml_legend=1 00:32:49.270 --rc geninfo_all_blocks=1 00:32:49.270 --rc geninfo_unexecuted_blocks=1 00:32:49.270 00:32:49.270 ' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:49.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:32:49.270 --rc genhtml_branch_coverage=1 00:32:49.270 --rc genhtml_function_coverage=1 00:32:49.270 --rc genhtml_legend=1 00:32:49.270 --rc geninfo_all_blocks=1 00:32:49.270 --rc geninfo_unexecuted_blocks=1 00:32:49.270 00:32:49.270 ' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:49.270 23:03:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.270 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.271 23:03:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:32:49.271 23:03:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:54.545 23:04:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:54.545 23:04:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:54.545 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:54.545 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:54.546 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:54.546 Found net devices under 0000:86:00.0: cvl_0_0 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:54.546 Found net devices under 0000:86:00.1: cvl_0_1 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:54.546 
23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:54.546 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:54.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:54.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.383 ms 00:32:54.806 00:32:54.806 --- 10.0.0.2 ping statistics --- 00:32:54.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.806 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:54.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:54.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:32:54.806 00:32:54.806 --- 10.0.0.1 ping statistics --- 00:32:54.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:54.806 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:54.806 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=2610335 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2610335 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2610335 ']' 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:55.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:55.065 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.065 [2024-12-10 23:04:02.610601] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:55.065 [2024-12-10 23:04:02.611596] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:32:55.065 [2024-12-10 23:04:02.611636] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:55.065 [2024-12-10 23:04:02.691831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:55.065 [2024-12-10 23:04:02.734428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:55.065 [2024-12-10 23:04:02.734468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:55.065 [2024-12-10 23:04:02.734475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:55.065 [2024-12-10 23:04:02.734482] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:55.065 [2024-12-10 23:04:02.734487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:55.065 [2024-12-10 23:04:02.735961] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:32:55.065 [2024-12-10 23:04:02.736072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:32:55.065 [2024-12-10 23:04:02.736196] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:55.065 [2024-12-10 23:04:02.736197] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:32:55.360 [2024-12-10 23:04:02.805507] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:55.360 [2024-12-10 23:04:02.806486] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:32:55.360 [2024-12-10 23:04:02.806615] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:55.360 [2024-12-10 23:04:02.807004] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:55.360 [2024-12-10 23:04:02.807045] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 [2024-12-10 23:04:02.884828] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 Malloc0 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:55.360 [2024-12-10 23:04:02.956967] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:55.360 { 00:32:55.360 "params": { 00:32:55.360 "name": "Nvme$subsystem", 00:32:55.360 "trtype": "$TEST_TRANSPORT", 00:32:55.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:55.360 "adrfam": "ipv4", 00:32:55.360 "trsvcid": "$NVMF_PORT", 00:32:55.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:55.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:55.360 "hdgst": ${hdgst:-false}, 00:32:55.360 "ddgst": ${ddgst:-false} 00:32:55.360 }, 00:32:55.360 "method": "bdev_nvme_attach_controller" 00:32:55.360 } 00:32:55.360 EOF 00:32:55.360 )") 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:32:55.360 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:32:55.361 23:04:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:55.361 "params": { 00:32:55.361 "name": "Nvme1", 00:32:55.361 "trtype": "tcp", 00:32:55.361 "traddr": "10.0.0.2", 00:32:55.361 "adrfam": "ipv4", 00:32:55.361 "trsvcid": "4420", 00:32:55.361 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:55.361 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:55.361 "hdgst": false, 00:32:55.361 "ddgst": false 00:32:55.361 }, 00:32:55.361 "method": "bdev_nvme_attach_controller" 00:32:55.361 }' 00:32:55.361 [2024-12-10 23:04:03.006263] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:32:55.361 [2024-12-10 23:04:03.006310] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610507 ] 00:32:55.644 [2024-12-10 23:04:03.084736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:55.644 [2024-12-10 23:04:03.130623] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:55.644 [2024-12-10 23:04:03.130726] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:55.644 [2024-12-10 23:04:03.130727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:55.925 I/O targets: 00:32:55.925 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:55.925 00:32:55.925 00:32:55.925 CUnit - A unit testing framework for C - Version 2.1-3 00:32:55.925 http://cunit.sourceforge.net/ 00:32:55.925 00:32:55.925 00:32:55.925 Suite: bdevio tests on: Nvme1n1 00:32:55.925 Test: blockdev write read block ...passed 00:32:55.925 Test: blockdev write zeroes read block ...passed 00:32:55.925 Test: blockdev write zeroes read no split ...passed 00:32:55.925 Test: blockdev 
write zeroes read split ...passed 00:32:55.925 Test: blockdev write zeroes read split partial ...passed 00:32:55.925 Test: blockdev reset ...[2024-12-10 23:04:03.515725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:55.925 [2024-12-10 23:04:03.515789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x89a050 (9): Bad file descriptor 00:32:55.925 [2024-12-10 23:04:03.559870] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:55.925 passed 00:32:55.925 Test: blockdev write read 8 blocks ...passed 00:32:55.925 Test: blockdev write read size > 128k ...passed 00:32:55.925 Test: blockdev write read invalid size ...passed 00:32:55.925 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:55.925 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:55.925 Test: blockdev write read max offset ...passed 00:32:56.185 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:56.185 Test: blockdev writev readv 8 blocks ...passed 00:32:56.185 Test: blockdev writev readv 30 x 1block ...passed 00:32:56.185 Test: blockdev writev readv block ...passed 00:32:56.185 Test: blockdev writev readv size > 128k ...passed 00:32:56.185 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:56.185 Test: blockdev comparev and writev ...[2024-12-10 23:04:03.729062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.729092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.729107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 
[2024-12-10 23:04:03.729115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.729412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.729424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.729436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.729443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.729724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.729734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.729748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.729757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.730042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.730055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.730071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:56.185 [2024-12-10 23:04:03.730080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:56.185 passed 00:32:56.185 Test: blockdev nvme passthru rw ...passed 00:32:56.185 Test: blockdev nvme passthru vendor specific ...[2024-12-10 23:04:03.812470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:56.185 [2024-12-10 23:04:03.812488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.812608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:56.185 [2024-12-10 23:04:03.812618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.812732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:56.185 [2024-12-10 23:04:03.812742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:56.185 [2024-12-10 23:04:03.812853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:56.185 [2024-12-10 23:04:03.812863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:56.185 passed 00:32:56.185 Test: blockdev nvme admin passthru ...passed 00:32:56.185 Test: blockdev copy ...passed 00:32:56.185 00:32:56.185 Run Summary: Type Total Ran Passed Failed Inactive 00:32:56.185 suites 1 1 n/a 0 0 00:32:56.185 tests 23 23 23 0 0 00:32:56.185 asserts 152 152 152 0 n/a 00:32:56.185 00:32:56.185 Elapsed time = 0.928 
seconds 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:56.445 rmmod nvme_tcp 00:32:56.445 rmmod nvme_fabrics 00:32:56.445 rmmod nvme_keyring 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 2610335 ']' 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2610335 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2610335 ']' 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2610335 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2610335 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2610335' 00:32:56.445 killing process with pid 2610335 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2610335 00:32:56.445 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2610335 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:56.705 23:04:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.241 23:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:59.241 00:32:59.241 real 0m9.950s 00:32:59.241 user 0m8.754s 00:32:59.241 sys 0m5.193s 00:32:59.241 23:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.241 23:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:59.241 ************************************ 00:32:59.241 END TEST nvmf_bdevio 00:32:59.241 ************************************ 00:32:59.241 23:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:32:59.241 00:32:59.241 real 4m33.617s 00:32:59.241 user 9m7.473s 00:32:59.241 sys 1m51.359s 00:32:59.241 23:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:32:59.241 23:04:06 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:59.241 ************************************ 00:32:59.241 END TEST nvmf_target_core_interrupt_mode 00:32:59.241 ************************************ 00:32:59.241 23:04:06 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:59.241 23:04:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:59.241 23:04:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.241 23:04:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.241 ************************************ 00:32:59.241 START TEST nvmf_interrupt 00:32:59.241 ************************************ 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:59.241 * Looking for test storage... 
00:32:59.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.241 --rc genhtml_branch_coverage=1 00:32:59.241 --rc genhtml_function_coverage=1 00:32:59.241 --rc genhtml_legend=1 00:32:59.241 --rc geninfo_all_blocks=1 00:32:59.241 --rc geninfo_unexecuted_blocks=1 00:32:59.241 00:32:59.241 ' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.241 --rc genhtml_branch_coverage=1 00:32:59.241 --rc 
genhtml_function_coverage=1 00:32:59.241 --rc genhtml_legend=1 00:32:59.241 --rc geninfo_all_blocks=1 00:32:59.241 --rc geninfo_unexecuted_blocks=1 00:32:59.241 00:32:59.241 ' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.241 --rc genhtml_branch_coverage=1 00:32:59.241 --rc genhtml_function_coverage=1 00:32:59.241 --rc genhtml_legend=1 00:32:59.241 --rc geninfo_all_blocks=1 00:32:59.241 --rc geninfo_unexecuted_blocks=1 00:32:59.241 00:32:59.241 ' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:59.241 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.241 --rc genhtml_branch_coverage=1 00:32:59.241 --rc genhtml_function_coverage=1 00:32:59.241 --rc genhtml_legend=1 00:32:59.241 --rc geninfo_all_blocks=1 00:32:59.241 --rc geninfo_unexecuted_blocks=1 00:32:59.241 00:32:59.241 ' 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:59.241 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:59.242 
23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.242 
23:04:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:59.242 23:04:06 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/interrupt/common.sh 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:59.242 
23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:32:59.242 23:04:06 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:05.813 23:04:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:05.813 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:05.813 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:05.813 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.814 23:04:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:05.814 Found net devices under 0000:86:00.0: cvl_0_0 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:05.814 Found net devices under 0000:86:00.1: cvl_0_1 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:05.814 23:04:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:05.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:05.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:33:05.814 00:33:05.814 --- 10.0.0.2 ping statistics --- 00:33:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.814 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:05.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:05.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:33:05.814 00:33:05.814 --- 10.0.0.1 ping statistics --- 00:33:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:05.814 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:05.814 23:04:12 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2614135 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2614135 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2614135 ']' 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:05.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.814 [2024-12-10 23:04:12.678253] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:05.814 [2024-12-10 23:04:12.679188] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:33:05.814 [2024-12-10 23:04:12.679224] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:05.814 [2024-12-10 23:04:12.756717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:05.814 [2024-12-10 23:04:12.798798] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:05.814 [2024-12-10 23:04:12.798831] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:05.814 [2024-12-10 23:04:12.798838] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:05.814 [2024-12-10 23:04:12.798845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:05.814 [2024-12-10 23:04:12.798850] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:05.814 [2024-12-10 23:04:12.800028] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:05.814 [2024-12-10 23:04:12.800030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:05.814 [2024-12-10 23:04:12.867816] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:05.814 [2024-12-10 23:04:12.868358] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:05.814 [2024-12-10 23:04:12.868555] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:05.814 5000+0 records in 00:33:05.814 5000+0 records out 00:33:05.814 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0145099 s, 706 MB/s 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.814 23:04:12 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.814 AIO0 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.814 23:04:13 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.814 [2024-12-10 23:04:13.008826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.814 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:05.815 [2024-12-10 23:04:13.045068] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2614135 0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614135 0 idle 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614135 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.25 reactor_0' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614135 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.25 reactor_0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:05.815 
23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2614135 1 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614135 1 idle 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614141 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614141 root 20 0 128.2g 
45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2614394 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2614135 0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2614135 0 busy 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:05.815 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614135 root 20 0 128.2g 46848 33792 R 62.5 0.0 0:00.36 reactor_0' 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614135 root 20 0 128.2g 46848 33792 R 62.5 0.0 0:00.36 reactor_0 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=62.5 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=62 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:33:06.074 23:04:13 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2614135 1 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2614135 1 busy 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614141 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1' 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614141 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:06.074 23:04:13 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2614394 00:33:16.055 Initializing NVMe Controllers 00:33:16.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:16.055 Controller IO queue size 256, less than required. 00:33:16.055 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:33:16.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:33:16.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:33:16.055 Initialization complete. Launching workers. 
00:33:16.055 ======================================================== 00:33:16.055 Latency(us) 00:33:16.055 Device Information : IOPS MiB/s Average min max 00:33:16.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16492.60 64.42 15530.17 3236.44 30231.23 00:33:16.055 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16700.29 65.24 15333.31 8136.76 56339.06 00:33:16.055 ======================================================== 00:33:16.055 Total : 33192.89 129.66 15431.12 3236.44 56339.06 00:33:16.055 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2614135 0 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614135 0 idle 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:16.055 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614135 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.25 reactor_0' 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614135 root 20 0 128.2g 46848 33792 S 6.7 0.0 0:20.25 reactor_0 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2614135 1 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614135 1 idle 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:16.315 23:04:23 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614141 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614141 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:33:16.315 23:04:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:16.315 23:04:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:16.884 23:04:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
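The `waitforserial` helper whose trace follows simply polls `lsblk -l -o NAME,SERIAL` until a device carrying the expected serial appears, sleeping 2 s between attempts with an upper bound of 16 tries. A simplified sketch of that loop (the device-counting step is factored into a pure helper so it can be exercised without real NVMe devices; function names are illustrative, not the real autotest_common.sh source):

```shell
#!/usr/bin/env bash

# Count how many lines of lsblk-style output mention the given serial.
count_serial() {
    echo "$1" | grep -c "$2"
}

# Poll until exactly one block device with the serial shows up,
# mirroring the traced loop: up to 16 attempts, 2 s apart.
# Returns 0 on success, 1 if the device never appears.
waitforserial() {
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    while (( i++ <= 15 )); do
        nvme_devices=$(count_serial "$(lsblk -l -o NAME,SERIAL)" "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

In the trace, the first poll after `nvme connect` already finds one device with serial `SPDKISFASTANDAWESOME`, so the loop returns 0 on its first iteration after the initial 2 s sleep.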
00:33:16.884 23:04:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:33:16.884 23:04:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:33:16.884 23:04:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:33:16.884 23:04:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2614135 0 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614135 0 idle 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:18.788 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614135 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.48 reactor_0' 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614135 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.48 reactor_0 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2614135 1 00:33:19.046 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2614135 1 idle 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2614135 00:33:19.047 
23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2614135 -w 256 00:33:19.047 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2614141 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1' 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2614141 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.09 reactor_1 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:19.306 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:19.306 rmmod nvme_tcp 00:33:19.306 rmmod nvme_fabrics 00:33:19.306 rmmod nvme_keyring 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:19.306 23:04:26 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 2614135 ']' 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2614135 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2614135 ']' 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2614135 00:33:19.306 23:04:26 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:33:19.306 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:19.306 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2614135 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2614135' 00:33:19.565 killing process with pid 2614135 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2614135 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2614135 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:19.565 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@791 -- # iptables-restore 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:19.566 23:04:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.098 23:04:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.098 00:33:22.098 real 0m22.810s 00:33:22.098 user 0m39.535s 00:33:22.098 sys 0m8.597s 00:33:22.098 23:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.098 23:04:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:22.098 ************************************ 00:33:22.098 END TEST nvmf_interrupt 00:33:22.098 ************************************ 00:33:22.098 00:33:22.098 real 27m27.042s 00:33:22.098 user 56m24.965s 00:33:22.098 sys 9m18.940s 00:33:22.098 23:04:29 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.098 23:04:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.098 ************************************ 00:33:22.098 END TEST nvmf_tcp 00:33:22.098 ************************************ 00:33:22.098 23:04:29 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:33:22.098 23:04:29 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:22.098 23:04:29 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:22.098 23:04:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.098 23:04:29 -- common/autotest_common.sh@10 -- # set +x 00:33:22.098 ************************************ 
00:33:22.098 START TEST spdkcli_nvmf_tcp 00:33:22.098 ************************************ 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:33:22.098 * Looking for test storage... 00:33:22.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.098 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v 
< (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.099 --rc genhtml_branch_coverage=1 00:33:22.099 --rc genhtml_function_coverage=1 00:33:22.099 --rc genhtml_legend=1 00:33:22.099 --rc geninfo_all_blocks=1 00:33:22.099 --rc geninfo_unexecuted_blocks=1 00:33:22.099 00:33:22.099 ' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.099 --rc genhtml_branch_coverage=1 00:33:22.099 --rc genhtml_function_coverage=1 00:33:22.099 --rc genhtml_legend=1 00:33:22.099 --rc geninfo_all_blocks=1 
00:33:22.099 --rc geninfo_unexecuted_blocks=1 00:33:22.099 00:33:22.099 ' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.099 --rc genhtml_branch_coverage=1 00:33:22.099 --rc genhtml_function_coverage=1 00:33:22.099 --rc genhtml_legend=1 00:33:22.099 --rc geninfo_all_blocks=1 00:33:22.099 --rc geninfo_unexecuted_blocks=1 00:33:22.099 00:33:22.099 ' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.099 --rc genhtml_branch_coverage=1 00:33:22.099 --rc genhtml_function_coverage=1 00:33:22.099 --rc genhtml_legend=1 00:33:22.099 --rc geninfo_all_blocks=1 00:33:22.099 --rc geninfo_unexecuted_blocks=1 00:33:22.099 00:33:22.099 ' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/common.sh 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/json_config/clear_config.py 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:22.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2617079 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2617079 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2617079 ']' 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:22.099 
23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:22.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:22.099 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.099 [2024-12-10 23:04:29.692332] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:33:22.099 [2024-12-10 23:04:29.692380] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617079 ] 00:33:22.099 [2024-12-10 23:04:29.767241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:22.099 [2024-12-10 23:04:29.807389] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:22.099 [2024-12-10 23:04:29.807391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:22.358 23:04:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:33:22.358 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:33:22.358 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:33:22.358 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:33:22.358 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:33:22.358 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:33:22.358 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:33:22.358 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:22.358 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:22.358 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:33:22.358 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:33:22.358 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:33:22.358 ' 00:33:25.644 [2024-12-10 23:04:32.649260] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.580 [2024-12-10 23:04:33.989768] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:29.120 [2024-12-10 23:04:36.481522] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:33:31.024 [2024-12-10 23:04:38.652359] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:32.928 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:32.928 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:32.928 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:32.928 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:32.928 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:32.928 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:32.928 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:32.928 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:32.928 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:32.928 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:32.928 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:32.928 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:32.928 23:04:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdkcli.py ll /nvmf 00:33:33.187 23:04:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:33.187 23:04:40 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:33.187 23:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:33.187 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:33.187 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.445 23:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:33.445 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:33.445 23:04:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:33.445 23:04:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:33.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:33.445 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:33.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:33.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:33.445 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:33.445 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:33.446 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:33.446 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:33:33.446 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:33.446 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:33.446 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:33.446 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:33.446 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:33.446 ' 00:33:38.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:38.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:38.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:38.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:38.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:38.718 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:38.718 Executing command: ['/nvmf/subsystem delete 
nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:38.718 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:38.718 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:38.718 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:38.718 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:38.718 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:38.718 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:38.718 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2617079 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2617079 ']' 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2617079 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2617079 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2617079' 00:33:38.977 killing process with pid 2617079 00:33:38.977 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2617079 00:33:38.978 23:04:46 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2617079 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2617079 ']' 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2617079 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2617079 ']' 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2617079 00:33:39.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2617079) - No such process 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2617079 is not found' 00:33:39.237 Process with pid 2617079 is not found 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:39.237 00:33:39.237 real 0m17.377s 00:33:39.237 user 0m38.289s 00:33:39.237 sys 0m0.835s 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:39.237 23:04:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:39.237 ************************************ 00:33:39.237 END TEST spdkcli_nvmf_tcp 00:33:39.237 ************************************ 00:33:39.237 23:04:46 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:39.237 23:04:46 -- common/autotest_common.sh@1105 -- 
# '[' 3 -le 1 ']' 00:33:39.237 23:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:39.237 23:04:46 -- common/autotest_common.sh@10 -- # set +x 00:33:39.237 ************************************ 00:33:39.237 START TEST nvmf_identify_passthru 00:33:39.237 ************************************ 00:33:39.237 23:04:46 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:39.237 * Looking for test storage... 00:33:39.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:33:39.497 23:04:46 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:39.497 23:04:46 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:33:39.497 23:04:46 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:39.497 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:39.497 23:04:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:39.498 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.498 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.498 --rc genhtml_branch_coverage=1 00:33:39.498 --rc genhtml_function_coverage=1 00:33:39.498 --rc genhtml_legend=1 
00:33:39.498 --rc geninfo_all_blocks=1 00:33:39.498 --rc geninfo_unexecuted_blocks=1 00:33:39.498 00:33:39.498 ' 00:33:39.498 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.498 --rc genhtml_branch_coverage=1 00:33:39.498 --rc genhtml_function_coverage=1 00:33:39.498 --rc genhtml_legend=1 00:33:39.498 --rc geninfo_all_blocks=1 00:33:39.498 --rc geninfo_unexecuted_blocks=1 00:33:39.498 00:33:39.498 ' 00:33:39.498 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.498 --rc genhtml_branch_coverage=1 00:33:39.498 --rc genhtml_function_coverage=1 00:33:39.498 --rc genhtml_legend=1 00:33:39.498 --rc geninfo_all_blocks=1 00:33:39.498 --rc geninfo_unexecuted_blocks=1 00:33:39.498 00:33:39.498 ' 00:33:39.498 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:39.498 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.498 --rc genhtml_branch_coverage=1 00:33:39.498 --rc genhtml_function_coverage=1 00:33:39.498 --rc genhtml_legend=1 00:33:39.498 --rc geninfo_all_blocks=1 00:33:39.498 --rc geninfo_unexecuted_blocks=1 00:33:39.498 00:33:39.498 ' 00:33:39.498 23:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.498 23:04:47 nvmf_identify_passthru -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:33:39.498 23:04:47 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:39.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:39.498 23:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.498 23:04:47 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:39.498 23:04:47 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.498 23:04:47 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:39.498 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:39.498 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:39.499 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:39.499 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:39.499 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:39.499 23:04:47 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:33:39.499 23:04:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.072 
23:04:52 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:46.072 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:46.072 Found 0000:86:00.1 
(0x8086 - 0x159b) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.072 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:46.072 Found net devices under 0000:86:00.0: cvl_0_0 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.073 23:04:52 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:46.073 Found net devices under 0000:86:00.1: cvl_0_1 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:46.073 
23:04:52 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:46.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:46.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.415 ms 00:33:46.073 00:33:46.073 --- 10.0.0.2 ping statistics --- 00:33:46.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.073 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:46.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:46.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:33:46.073 00:33:46.073 --- 10.0.0.1 ping statistics --- 00:33:46.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.073 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:46.073 23:04:52 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:46.073 
23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/gen_nvme.sh 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:46.073 23:04:53 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:46.073 23:04:53 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:50.262 23:04:57 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:33:50.262 23:04:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:50.262 23:04:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:50.262 23:04:57 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2624335 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2624335 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2624335 ']' 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 [2024-12-10 23:05:01.590875] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:33:54.452 [2024-12-10 23:05:01.590924] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.452 [2024-12-10 23:05:01.670983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:54.452 [2024-12-10 23:05:01.713213] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.452 [2024-12-10 23:05:01.713251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.452 [2024-12-10 23:05:01.713258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.452 [2024-12-10 23:05:01.713264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:54.452 [2024-12-10 23:05:01.713269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:54.452 [2024-12-10 23:05:01.714828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.452 [2024-12-10 23:05:01.714935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.452 [2024-12-10 23:05:01.715045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.452 [2024-12-10 23:05:01.715045] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 INFO: Log level set to 20 00:33:54.452 INFO: Requests: 00:33:54.452 { 00:33:54.452 "jsonrpc": "2.0", 00:33:54.452 "method": "nvmf_set_config", 00:33:54.452 "id": 1, 00:33:54.452 "params": { 00:33:54.452 "admin_cmd_passthru": { 00:33:54.452 "identify_ctrlr": true 00:33:54.452 } 00:33:54.452 } 00:33:54.452 } 00:33:54.452 00:33:54.452 INFO: response: 00:33:54.452 { 00:33:54.452 "jsonrpc": "2.0", 00:33:54.452 "id": 1, 00:33:54.452 "result": true 00:33:54.452 } 00:33:54.452 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 INFO: Setting log level to 20 00:33:54.452 INFO: Setting log level to 20 00:33:54.452 INFO: Log level set to 20 00:33:54.452 INFO: Log level set to 20 00:33:54.452 
INFO: Requests: 00:33:54.452 { 00:33:54.452 "jsonrpc": "2.0", 00:33:54.452 "method": "framework_start_init", 00:33:54.452 "id": 1 00:33:54.452 } 00:33:54.452 00:33:54.452 INFO: Requests: 00:33:54.452 { 00:33:54.452 "jsonrpc": "2.0", 00:33:54.452 "method": "framework_start_init", 00:33:54.452 "id": 1 00:33:54.452 } 00:33:54.452 00:33:54.452 [2024-12-10 23:05:01.824403] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:54.452 INFO: response: 00:33:54.452 { 00:33:54.452 "jsonrpc": "2.0", 00:33:54.452 "id": 1, 00:33:54.452 "result": true 00:33:54.452 } 00:33:54.452 00:33:54.452 INFO: response: 00:33:54.452 { 00:33:54.452 "jsonrpc": "2.0", 00:33:54.452 "id": 1, 00:33:54.452 "result": true 00:33:54.452 } 00:33:54.452 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.452 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.452 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.452 INFO: Setting log level to 40 00:33:54.452 INFO: Setting log level to 40 00:33:54.452 INFO: Setting log level to 40 00:33:54.453 [2024-12-10 23:05:01.837704] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:54.453 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.453 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:54.453 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:54.453 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:54.453 23:05:01 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:54.453 23:05:01 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.453 23:05:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.738 Nvme0n1 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.738 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.738 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.738 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.738 [2024-12-10 23:05:04.748875] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.738 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:57.738 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.739 23:05:04 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.739 [ 00:33:57.739 { 00:33:57.739 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:57.739 "subtype": "Discovery", 00:33:57.739 "listen_addresses": [], 00:33:57.739 "allow_any_host": true, 00:33:57.739 "hosts": [] 00:33:57.739 }, 00:33:57.739 { 00:33:57.739 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:57.739 "subtype": "NVMe", 00:33:57.739 "listen_addresses": [ 00:33:57.739 { 00:33:57.739 "trtype": "TCP", 00:33:57.739 "adrfam": "IPv4", 00:33:57.739 "traddr": "10.0.0.2", 00:33:57.739 "trsvcid": "4420" 00:33:57.739 } 00:33:57.739 ], 00:33:57.739 "allow_any_host": true, 00:33:57.739 "hosts": [], 00:33:57.739 "serial_number": "SPDK00000000000001", 00:33:57.739 "model_number": "SPDK bdev Controller", 00:33:57.739 "max_namespaces": 1, 00:33:57.739 "min_cntlid": 1, 00:33:57.739 "max_cntlid": 65519, 00:33:57.739 "namespaces": [ 00:33:57.739 { 00:33:57.739 "nsid": 1, 00:33:57.739 "bdev_name": "Nvme0n1", 00:33:57.739 "name": "Nvme0n1", 00:33:57.739 "nguid": "873738E8FBEA47E5AED881518231746A", 00:33:57.739 "uuid": "873738e8-fbea-47e5-aed8-81518231746a" 00:33:57.739 } 00:33:57.739 ] 00:33:57.739 } 00:33:57.739 ] 00:33:57.739 23:05:04 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.739 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:57.739 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:57.739 23:05:04 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:57.739 23:05:05 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:57.739 rmmod nvme_tcp 00:33:57.739 rmmod nvme_fabrics 00:33:57.739 rmmod nvme_keyring 00:33:57.739 23:05:05 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 2624335 ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2624335 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2624335 ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2624335 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2624335 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2624335' 00:33:57.739 killing process with pid 2624335 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2624335 00:33:57.739 23:05:05 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2624335 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:33:59.640 23:05:06 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:59.640 23:05:06 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.640 23:05:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:59.640 23:05:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.544 23:05:09 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:01.544 00:34:01.544 real 0m22.139s 00:34:01.544 user 0m27.517s 00:34:01.544 sys 0m6.335s 00:34:01.544 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:01.544 23:05:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:01.544 ************************************ 00:34:01.544 END TEST nvmf_identify_passthru 00:34:01.544 ************************************ 00:34:01.544 23:05:09 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:34:01.544 23:05:09 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:01.544 23:05:09 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:01.544 23:05:09 -- common/autotest_common.sh@10 -- # set +x 00:34:01.544 ************************************ 00:34:01.544 START TEST nvmf_dif 00:34:01.544 ************************************ 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/dif.sh 00:34:01.544 * Looking for test storage... 
00:34:01.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:01.544 23:05:09 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:01.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.544 --rc genhtml_branch_coverage=1 00:34:01.544 --rc genhtml_function_coverage=1 00:34:01.544 --rc genhtml_legend=1 00:34:01.544 --rc geninfo_all_blocks=1 00:34:01.544 --rc geninfo_unexecuted_blocks=1 00:34:01.544 00:34:01.544 ' 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:01.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.544 --rc genhtml_branch_coverage=1 00:34:01.544 --rc genhtml_function_coverage=1 00:34:01.544 --rc genhtml_legend=1 00:34:01.544 --rc geninfo_all_blocks=1 00:34:01.544 --rc geninfo_unexecuted_blocks=1 00:34:01.544 00:34:01.544 ' 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:34:01.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.544 --rc genhtml_branch_coverage=1 00:34:01.544 --rc genhtml_function_coverage=1 00:34:01.544 --rc genhtml_legend=1 00:34:01.544 --rc geninfo_all_blocks=1 00:34:01.544 --rc geninfo_unexecuted_blocks=1 00:34:01.544 00:34:01.544 ' 00:34:01.544 23:05:09 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:01.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:01.544 --rc genhtml_branch_coverage=1 00:34:01.544 --rc genhtml_function_coverage=1 00:34:01.544 --rc genhtml_legend=1 00:34:01.544 --rc geninfo_all_blocks=1 00:34:01.544 --rc geninfo_unexecuted_blocks=1 00:34:01.544 00:34:01.544 ' 00:34:01.544 23:05:09 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:01.803 23:05:09 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:34:01.803 23:05:09 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:01.803 23:05:09 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.803 23:05:09 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.803 23:05:09 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.803 23:05:09 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.803 23:05:09 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.803 23:05:09 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.803 23:05:09 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:01.803 23:05:09 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:01.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:01.803 23:05:09 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:01.803 23:05:09 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:34:01.803 23:05:09 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:01.803 23:05:09 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:01.803 23:05:09 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:01.803 23:05:09 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:01.804 23:05:09 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.804 23:05:09 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:01.804 23:05:09 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.804 23:05:09 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:01.804 23:05:09 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:01.804 23:05:09 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:34:01.804 23:05:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:07.148 23:05:14 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:07.148 23:05:14 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:07.149 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:07.149 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.149 23:05:14 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:07.149 Found net devices under 0000:86:00.0: cvl_0_0 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:07.149 Found net devices under 0000:86:00.1: cvl_0_1 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:07.149 
23:05:14 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:07.149 23:05:14 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:07.408 23:05:14 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:07.408 23:05:14 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:07.408 23:05:14 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:07.408 23:05:14 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:07.408 23:05:15 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:07.408 23:05:15 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:07.666 23:05:15 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:07.666 23:05:15 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:07.666 23:05:15 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:07.666 23:05:15 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:07.666 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:07.666 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.434 ms 00:34:07.666 00:34:07.666 --- 10.0.0.2 ping statistics --- 00:34:07.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.667 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:34:07.667 23:05:15 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:07.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:07.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:34:07.667 00:34:07.667 --- 10.0.0.1 ping statistics --- 00:34:07.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:07.667 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:34:07.667 23:05:15 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:07.667 23:05:15 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:07.667 23:05:15 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:07.667 23:05:15 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:34:10.201 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:10.201 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:34:10.201 0000:80:04.4 (8086 2021): 
Already using the vfio-pci driver 00:34:10.201 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:34:10.461 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:34:10.461 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:34:10.461 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:10.461 23:05:18 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:10.461 23:05:18 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2629806 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2629806 00:34:10.461 23:05:18 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2629806 ']' 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:10.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:10.461 23:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.461 [2024-12-10 23:05:18.177172] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:34:10.461 [2024-12-10 23:05:18.177218] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:10.720 [2024-12-10 23:05:18.257224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:10.720 [2024-12-10 23:05:18.296111] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:10.720 [2024-12-10 23:05:18.296143] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:10.720 [2024-12-10 23:05:18.296151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:10.720 [2024-12-10 23:05:18.296162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:10.720 [2024-12-10 23:05:18.296196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:10.720 [2024-12-10 23:05:18.296737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:10.720 23:05:18 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:10.720 23:05:18 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:34:10.720 23:05:18 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:10.720 23:05:18 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:10.720 23:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.720 23:05:18 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:10.720 23:05:18 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:10.720 23:05:18 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:10.720 23:05:18 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.720 23:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.980 [2024-12-10 23:05:18.446339] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:10.980 23:05:18 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.980 23:05:18 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:10.980 23:05:18 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:10.980 23:05:18 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:10.980 23:05:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:10.980 ************************************ 00:34:10.980 START TEST fio_dif_1_default 00:34:10.980 ************************************ 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:10.980 bdev_null0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:10.980 [2024-12-10 23:05:18.522660] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:10.980 { 00:34:10.980 "params": { 00:34:10.980 "name": "Nvme$subsystem", 00:34:10.980 "trtype": "$TEST_TRANSPORT", 00:34:10.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:10.980 "adrfam": "ipv4", 00:34:10.980 "trsvcid": "$NVMF_PORT", 00:34:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:10.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:10.980 "hdgst": ${hdgst:-false}, 00:34:10.980 "ddgst": ${ddgst:-false} 00:34:10.980 }, 00:34:10.980 "method": "bdev_nvme_attach_controller" 00:34:10.980 } 00:34:10.980 EOF 00:34:10.980 )") 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:10.980 "params": { 00:34:10.980 "name": "Nvme0", 00:34:10.980 "trtype": "tcp", 00:34:10.980 "traddr": "10.0.0.2", 00:34:10.980 "adrfam": "ipv4", 00:34:10.980 "trsvcid": "4420", 00:34:10.980 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:10.980 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:10.980 "hdgst": false, 00:34:10.980 "ddgst": false 00:34:10.980 }, 00:34:10.980 "method": "bdev_nvme_attach_controller" 00:34:10.980 }' 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:10.980 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:10.981 23:05:18 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:11.239 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:11.239 fio-3.35 
00:34:11.239 Starting 1 thread 00:34:23.446 00:34:23.446 filename0: (groupid=0, jobs=1): err= 0: pid=2630178: Tue Dec 10 23:05:29 2024 00:34:23.446 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10011msec) 00:34:23.446 slat (nsec): min=6056, max=26443, avg=6390.53, stdev=875.65 00:34:23.446 clat (usec): min=40796, max=43323, avg=41008.59, stdev=197.79 00:34:23.446 lat (usec): min=40802, max=43349, avg=41014.98, stdev=198.19 00:34:23.446 clat percentiles (usec): 00:34:23.446 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:23.446 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:23.446 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:23.446 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:34:23.446 | 99.99th=[43254] 00:34:23.446 bw ( KiB/s): min= 384, max= 416, per=99.49%, avg=388.80, stdev=11.72, samples=20 00:34:23.446 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:34:23.446 lat (msec) : 50=100.00% 00:34:23.446 cpu : usr=92.57%, sys=7.18%, ctx=14, majf=0, minf=0 00:34:23.446 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:23.446 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.446 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:23.446 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:23.446 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:23.446 00:34:23.446 Run status group 0 (all jobs): 00:34:23.447 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), run=10011-10011msec 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 00:34:23.447 real 0m11.250s 00:34:23.447 user 0m16.232s 00:34:23.447 sys 0m0.995s 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 ************************************ 00:34:23.447 END TEST fio_dif_1_default 00:34:23.447 ************************************ 00:34:23.447 23:05:29 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:23.447 23:05:29 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:23.447 23:05:29 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 ************************************ 00:34:23.447 START TEST fio_dif_1_multi_subsystems 00:34:23.447 ************************************ 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 bdev_null0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 [2024-12-10 23:05:29.849809] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 bdev_null1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.447 { 00:34:23.447 "params": { 00:34:23.447 "name": "Nvme$subsystem", 00:34:23.447 "trtype": "$TEST_TRANSPORT", 00:34:23.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.447 "adrfam": "ipv4", 00:34:23.447 "trsvcid": "$NVMF_PORT", 00:34:23.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.447 "hdgst": ${hdgst:-false}, 00:34:23.447 "ddgst": ${ddgst:-false} 00:34:23.447 }, 00:34:23.447 "method": "bdev_nvme_attach_controller" 00:34:23.447 } 00:34:23.447 EOF 00:34:23.447 )") 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.447 
23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:23.447 { 00:34:23.447 "params": { 00:34:23.447 "name": "Nvme$subsystem", 00:34:23.447 "trtype": "$TEST_TRANSPORT", 00:34:23.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:23.447 "adrfam": "ipv4", 00:34:23.447 "trsvcid": "$NVMF_PORT", 00:34:23.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:23.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:23.447 "hdgst": ${hdgst:-false}, 00:34:23.447 "ddgst": ${ddgst:-false} 00:34:23.447 }, 00:34:23.447 "method": "bdev_nvme_attach_controller" 00:34:23.447 } 00:34:23.447 EOF 00:34:23.447 )") 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:34:23.447 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:23.447 "params": { 00:34:23.448 "name": "Nvme0", 00:34:23.448 "trtype": "tcp", 00:34:23.448 "traddr": "10.0.0.2", 00:34:23.448 "adrfam": "ipv4", 00:34:23.448 "trsvcid": "4420", 00:34:23.448 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:23.448 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:23.448 "hdgst": false, 00:34:23.448 "ddgst": false 00:34:23.448 }, 00:34:23.448 "method": "bdev_nvme_attach_controller" 00:34:23.448 },{ 00:34:23.448 "params": { 00:34:23.448 "name": "Nvme1", 00:34:23.448 "trtype": "tcp", 00:34:23.448 "traddr": "10.0.0.2", 00:34:23.448 "adrfam": "ipv4", 00:34:23.448 "trsvcid": "4420", 00:34:23.448 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:23.448 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:23.448 "hdgst": false, 00:34:23.448 "ddgst": false 00:34:23.448 }, 00:34:23.448 "method": "bdev_nvme_attach_controller" 00:34:23.448 }' 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:23.448 23:05:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:23.448 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.448 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:23.448 fio-3.35 00:34:23.448 Starting 2 threads 00:34:33.425 00:34:33.425 filename0: (groupid=0, jobs=1): err= 0: pid=2632142: Tue Dec 10 23:05:40 2024 00:34:33.425 read: IOPS=107, BW=428KiB/s (439kB/s)(4288KiB/10010msec) 00:34:33.425 slat (nsec): min=6413, max=83521, avg=11676.52, stdev=8710.35 00:34:33.425 clat (usec): min=411, max=42555, avg=37310.74, stdev=12599.60 00:34:33.425 lat (usec): min=420, max=42562, avg=37322.42, stdev=12599.38 00:34:33.425 clat percentiles (usec): 00:34:33.425 | 1.00th=[ 424], 5.00th=[ 453], 10.00th=[ 506], 20.00th=[41157], 00:34:33.425 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[41681], 00:34:33.425 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:33.425 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:33.425 | 99.99th=[42730] 00:34:33.425 bw ( KiB/s): min= 352, max= 544, per=35.71%, avg=427.20, stdev=43.15, samples=20 00:34:33.425 iops : min= 88, max= 136, avg=106.80, stdev=10.79, samples=20 00:34:33.425 lat (usec) : 500=9.42%, 750=1.03% 00:34:33.425 lat (msec) : 50=89.55% 00:34:33.425 cpu : usr=98.78%, sys=0.90%, ctx=40, majf=0, minf=158 00:34:33.425 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:33.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.425 issued rwts: total=1072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.425 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:33.425 filename1: (groupid=0, jobs=1): err= 0: pid=2632143: Tue Dec 10 23:05:40 2024 00:34:33.425 read: IOPS=191, BW=768KiB/s (786kB/s)(7696KiB/10022msec) 00:34:33.425 slat (nsec): min=6352, max=45997, avg=8865.43, stdev=5262.75 00:34:33.425 clat (usec): min=385, max=42547, avg=20807.91, stdev=20514.74 00:34:33.425 lat (usec): min=391, max=42555, avg=20816.77, stdev=20512.90 00:34:33.425 clat percentiles (usec): 00:34:33.425 | 1.00th=[ 404], 5.00th=[ 408], 10.00th=[ 412], 20.00th=[ 420], 00:34:33.425 | 30.00th=[ 429], 40.00th=[ 433], 50.00th=[ 652], 60.00th=[41681], 00:34:33.425 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:34:33.425 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:34:33.425 | 99.99th=[42730] 00:34:33.425 bw ( KiB/s): min= 704, max= 832, per=64.23%, avg=768.00, stdev=20.76, samples=20 00:34:33.425 iops : min= 176, max= 208, avg=192.00, stdev= 5.19, samples=20 00:34:33.425 lat (usec) : 500=49.69%, 750=0.42% 00:34:33.425 lat (msec) : 2=0.21%, 50=49.69% 00:34:33.425 cpu : usr=97.15%, sys=2.56%, ctx=18, majf=0, minf=37 00:34:33.425 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:33.425 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.425 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:33.425 issued rwts: total=1924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:33.425 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:33.425 00:34:33.425 Run status group 0 (all jobs): 00:34:33.425 READ: bw=1196KiB/s (1224kB/s), 428KiB/s-768KiB/s (439kB/s-786kB/s), io=11.7MiB (12.3MB), run=10010-10022msec 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.425 00:34:33.425 real 0m11.303s 00:34:33.425 user 0m26.473s 00:34:33.425 sys 0m0.660s 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:33.425 23:05:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:33.425 ************************************ 00:34:33.425 END TEST fio_dif_1_multi_subsystems 00:34:33.425 ************************************ 00:34:33.685 23:05:41 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:33.685 23:05:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:33.685 23:05:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:33.685 23:05:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:33.685 ************************************ 00:34:33.685 START TEST fio_dif_rand_params 00:34:33.685 ************************************ 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 
00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.685 bdev_null0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:33.685 [2024-12-10 23:05:41.230918] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:33.685 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:33.685 { 00:34:33.686 "params": { 00:34:33.686 "name": "Nvme$subsystem", 00:34:33.686 
"trtype": "$TEST_TRANSPORT", 00:34:33.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:33.686 "adrfam": "ipv4", 00:34:33.686 "trsvcid": "$NVMF_PORT", 00:34:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:33.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:33.686 "hdgst": ${hdgst:-false}, 00:34:33.686 "ddgst": ${ddgst:-false} 00:34:33.686 }, 00:34:33.686 "method": "bdev_nvme_attach_controller" 00:34:33.686 } 00:34:33.686 EOF 00:34:33.686 )") 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # grep libasan 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:33.686 "params": { 00:34:33.686 "name": "Nvme0", 00:34:33.686 "trtype": "tcp", 00:34:33.686 "traddr": "10.0.0.2", 00:34:33.686 "adrfam": "ipv4", 00:34:33.686 "trsvcid": "4420", 00:34:33.686 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:33.686 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:33.686 "hdgst": false, 00:34:33.686 "ddgst": false 00:34:33.686 }, 00:34:33.686 "method": "bdev_nvme_attach_controller" 00:34:33.686 }' 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:33.686 23:05:41 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:33.945 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:33.945 ... 00:34:33.945 fio-3.35 00:34:33.945 Starting 3 threads 00:34:40.507 00:34:40.507 filename0: (groupid=0, jobs=1): err= 0: pid=2634107: Tue Dec 10 23:05:47 2024 00:34:40.507 read: IOPS=302, BW=37.9MiB/s (39.7MB/s)(189MiB/5003msec) 00:34:40.507 slat (nsec): min=6856, max=54010, avg=24231.16, stdev=8850.17 00:34:40.507 clat (usec): min=3287, max=51566, avg=9878.35, stdev=5319.05 00:34:40.507 lat (usec): min=3295, max=51592, avg=9902.58, stdev=5320.45 00:34:40.507 clat percentiles (usec): 00:34:40.507 | 1.00th=[ 3720], 5.00th=[ 4424], 10.00th=[ 6259], 20.00th=[ 6980], 00:34:40.507 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10290], 00:34:40.507 | 70.00th=[10814], 80.00th=[11338], 90.00th=[11994], 95.00th=[12387], 00:34:40.507 | 99.00th=[46400], 99.50th=[49021], 99.90th=[50594], 99.95th=[51643], 00:34:40.507 | 99.99th=[51643] 00:34:40.507 bw ( KiB/s): min=31232, max=49920, per=34.04%, avg=38739.20, stdev=6537.62, samples=10 00:34:40.507 iops : min= 244, max= 390, avg=302.60, stdev=51.13, samples=10 00:34:40.507 lat (msec) : 4=3.70%, 10=51.95%, 20=42.77%, 50=1.25%, 100=0.33% 00:34:40.507 cpu : usr=90.74%, sys=5.56%, ctx=370, majf=0, minf=9 00:34:40.507 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.507 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.507 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.507 issued rwts: total=1515,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.507 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.507 filename0: (groupid=0, jobs=1): err= 0: pid=2634108: Tue Dec 10 23:05:47 2024 00:34:40.507 read: IOPS=305, BW=38.1MiB/s (40.0MB/s)(192MiB/5043msec) 
00:34:40.507 slat (nsec): min=6394, max=57924, avg=16041.66, stdev=6350.24 00:34:40.507 clat (usec): min=3516, max=51258, avg=9784.68, stdev=5423.53 00:34:40.507 lat (usec): min=3523, max=51271, avg=9800.72, stdev=5423.39 00:34:40.507 clat percentiles (usec): 00:34:40.507 | 1.00th=[ 3818], 5.00th=[ 5669], 10.00th=[ 6259], 20.00th=[ 6849], 00:34:40.508 | 30.00th=[ 8356], 40.00th=[ 9110], 50.00th=[ 9634], 60.00th=[10028], 00:34:40.508 | 70.00th=[10421], 80.00th=[10945], 90.00th=[11600], 95.00th=[12256], 00:34:40.508 | 99.00th=[47973], 99.50th=[48497], 99.90th=[50594], 99.95th=[51119], 00:34:40.508 | 99.99th=[51119] 00:34:40.508 bw ( KiB/s): min=33792, max=46848, per=34.58%, avg=39347.20, stdev=3886.28, samples=10 00:34:40.508 iops : min= 264, max= 366, avg=307.40, stdev=30.36, samples=10 00:34:40.508 lat (msec) : 4=1.82%, 10=56.40%, 20=40.09%, 50=1.56%, 100=0.13% 00:34:40.508 cpu : usr=95.85%, sys=3.83%, ctx=5, majf=0, minf=9 00:34:40.508 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.508 issued rwts: total=1539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.508 filename0: (groupid=0, jobs=1): err= 0: pid=2634109: Tue Dec 10 23:05:47 2024 00:34:40.508 read: IOPS=283, BW=35.5MiB/s (37.2MB/s)(179MiB/5045msec) 00:34:40.508 slat (nsec): min=5575, max=38349, avg=15961.45, stdev=6408.08 00:34:40.508 clat (usec): min=4797, max=51555, avg=10528.22, stdev=9042.07 00:34:40.508 lat (usec): min=4806, max=51564, avg=10544.18, stdev=9041.80 00:34:40.508 clat percentiles (usec): 00:34:40.508 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 7308], 20.00th=[ 7898], 00:34:40.508 | 30.00th=[ 8160], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8717], 00:34:40.508 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9765], 
95.00th=[45351], 00:34:40.508 | 99.00th=[50070], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:34:40.508 | 99.99th=[51643] 00:34:40.508 bw ( KiB/s): min=22528, max=47616, per=32.13%, avg=36556.80, stdev=8456.87, samples=10 00:34:40.508 iops : min= 176, max= 372, avg=285.60, stdev=66.07, samples=10 00:34:40.508 lat (msec) : 10=92.45%, 20=2.38%, 50=3.77%, 100=1.40% 00:34:40.508 cpu : usr=96.23%, sys=3.45%, ctx=5, majf=0, minf=9 00:34:40.508 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:40.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:40.508 issued rwts: total=1431,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:40.508 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:40.508 00:34:40.508 Run status group 0 (all jobs): 00:34:40.508 READ: bw=111MiB/s (117MB/s), 35.5MiB/s-38.1MiB/s (37.2MB/s-40.0MB/s), io=561MiB (588MB), run=5003-5045msec 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 bdev_null0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:40.508 23:05:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 [2024-12-10 23:05:47.599011] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 bdev_null1 00:34:40.508 23:05:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 bdev_null2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 
00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:40.508 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:40.508 { 00:34:40.508 "params": { 00:34:40.508 "name": "Nvme$subsystem", 00:34:40.508 "trtype": "$TEST_TRANSPORT", 00:34:40.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.509 "adrfam": "ipv4", 00:34:40.509 "trsvcid": "$NVMF_PORT", 00:34:40.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.509 "hdgst": ${hdgst:-false}, 00:34:40.509 "ddgst": ${ddgst:-false} 00:34:40.509 }, 00:34:40.509 "method": "bdev_nvme_attach_controller" 00:34:40.509 } 00:34:40.509 EOF 00:34:40.509 )") 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:40.509 { 00:34:40.509 "params": { 00:34:40.509 "name": "Nvme$subsystem", 00:34:40.509 "trtype": "$TEST_TRANSPORT", 00:34:40.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.509 "adrfam": "ipv4", 00:34:40.509 "trsvcid": "$NVMF_PORT", 00:34:40.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.509 "hdgst": ${hdgst:-false}, 00:34:40.509 "ddgst": ${ddgst:-false} 00:34:40.509 }, 00:34:40.509 "method": "bdev_nvme_attach_controller" 00:34:40.509 } 00:34:40.509 EOF 00:34:40.509 )") 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:40.509 
23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:40.509 { 00:34:40.509 "params": { 00:34:40.509 "name": "Nvme$subsystem", 00:34:40.509 "trtype": "$TEST_TRANSPORT", 00:34:40.509 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:40.509 "adrfam": "ipv4", 00:34:40.509 "trsvcid": "$NVMF_PORT", 00:34:40.509 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:40.509 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:40.509 "hdgst": ${hdgst:-false}, 00:34:40.509 "ddgst": ${ddgst:-false} 00:34:40.509 }, 00:34:40.509 "method": "bdev_nvme_attach_controller" 00:34:40.509 } 00:34:40.509 EOF 00:34:40.509 )") 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:40.509 "params": { 00:34:40.509 "name": "Nvme0", 00:34:40.509 "trtype": "tcp", 00:34:40.509 "traddr": "10.0.0.2", 00:34:40.509 "adrfam": "ipv4", 00:34:40.509 "trsvcid": "4420", 00:34:40.509 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:40.509 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:40.509 "hdgst": false, 00:34:40.509 "ddgst": false 00:34:40.509 }, 00:34:40.509 "method": "bdev_nvme_attach_controller" 00:34:40.509 },{ 00:34:40.509 "params": { 00:34:40.509 "name": "Nvme1", 00:34:40.509 "trtype": "tcp", 00:34:40.509 "traddr": "10.0.0.2", 00:34:40.509 "adrfam": "ipv4", 00:34:40.509 "trsvcid": "4420", 00:34:40.509 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:40.509 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:40.509 "hdgst": false, 00:34:40.509 "ddgst": false 00:34:40.509 }, 00:34:40.509 "method": "bdev_nvme_attach_controller" 00:34:40.509 },{ 00:34:40.509 "params": { 00:34:40.509 "name": "Nvme2", 00:34:40.509 "trtype": "tcp", 00:34:40.509 "traddr": "10.0.0.2", 00:34:40.509 "adrfam": "ipv4", 00:34:40.509 "trsvcid": "4420", 00:34:40.509 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:40.509 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:40.509 "hdgst": false, 00:34:40.509 "ddgst": false 00:34:40.509 }, 00:34:40.509 "method": "bdev_nvme_attach_controller" 00:34:40.509 }' 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:40.509 23:05:47 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:40.509 23:05:47 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:40.509 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:40.509 ... 00:34:40.509 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:40.509 ... 00:34:40.509 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:40.509 ... 
00:34:40.509 fio-3.35 00:34:40.509 Starting 24 threads 00:34:52.703 00:34:52.703 filename0: (groupid=0, jobs=1): err= 0: pid=2635202: Tue Dec 10 23:05:59 2024 00:34:52.703 read: IOPS=71, BW=288KiB/s (295kB/s)(2904KiB/10088msec) 00:34:52.703 slat (nsec): min=4360, max=15491, avg=8374.46, stdev=1840.00 00:34:52.703 clat (msec): min=130, max=468, avg=222.04, stdev=42.64 00:34:52.703 lat (msec): min=130, max=468, avg=222.05, stdev=42.64 00:34:52.703 clat percentiles (msec): 00:34:52.703 | 1.00th=[ 131], 5.00th=[ 199], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.703 | 30.00th=[ 209], 40.00th=[ 220], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.703 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 259], 00:34:52.703 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:34:52.703 | 99.99th=[ 468] 00:34:52.703 bw ( KiB/s): min= 128, max= 384, per=4.26%, avg=284.00, stdev=57.07, samples=20 00:34:52.703 iops : min= 32, max= 96, avg=71.00, stdev=14.27, samples=20 00:34:52.703 lat (msec) : 250=93.66%, 500=6.34% 00:34:52.703 cpu : usr=98.78%, sys=0.85%, ctx=13, majf=0, minf=21 00:34:52.703 IO depths : 1=2.2%, 2=4.5%, 4=13.2%, 8=69.7%, 16=10.3%, 32=0.0%, >=64=0.0% 00:34:52.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.703 complete : 0=0.0%, 4=90.6%, 8=3.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.703 issued rwts: total=726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.703 filename0: (groupid=0, jobs=1): err= 0: pid=2635203: Tue Dec 10 23:05:59 2024 00:34:52.703 read: IOPS=72, BW=288KiB/s (295kB/s)(2920KiB/10122msec) 00:34:52.703 slat (nsec): min=7026, max=33213, avg=9958.12, stdev=4225.71 00:34:52.703 clat (msec): min=67, max=359, avg=221.63, stdev=54.20 00:34:52.703 lat (msec): min=67, max=359, avg=221.64, stdev=54.20 00:34:52.703 clat percentiles (msec): 00:34:52.703 | 1.00th=[ 68], 5.00th=[ 153], 10.00th=[ 182], 20.00th=[ 192], 00:34:52.703 | 
30.00th=[ 201], 40.00th=[ 205], 50.00th=[ 215], 60.00th=[ 220], 00:34:52.703 | 70.00th=[ 236], 80.00th=[ 251], 90.00th=[ 305], 95.00th=[ 351], 00:34:52.703 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:34:52.703 | 99.99th=[ 359] 00:34:52.703 bw ( KiB/s): min= 224, max= 384, per=4.28%, avg=285.60, stdev=47.94, samples=20 00:34:52.703 iops : min= 56, max= 96, avg=71.40, stdev=11.98, samples=20 00:34:52.703 lat (msec) : 100=4.38%, 250=74.79%, 500=20.82% 00:34:52.703 cpu : usr=98.78%, sys=0.85%, ctx=12, majf=0, minf=33 00:34:52.703 IO depths : 1=0.4%, 2=1.5%, 4=8.4%, 8=76.8%, 16=12.9%, 32=0.0%, >=64=0.0% 00:34:52.703 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.703 complete : 0=0.0%, 4=89.1%, 8=6.4%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.703 issued rwts: total=730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.703 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename0: (groupid=0, jobs=1): err= 0: pid=2635205: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10088msec) 00:34:52.704 slat (nsec): min=5959, max=43045, avg=9118.25, stdev=4417.02 00:34:52.704 clat (msec): min=119, max=664, avg=325.31, stdev=60.22 00:34:52.704 lat (msec): min=119, max=664, avg=325.32, stdev=60.22 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 205], 5.00th=[ 220], 10.00th=[ 268], 20.00th=[ 279], 00:34:52.704 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 334], 60.00th=[ 342], 00:34:52.704 | 70.00th=[ 355], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 439], 00:34:52.704 | 99.00th=[ 518], 99.50th=[ 518], 99.90th=[ 667], 99.95th=[ 667], 00:34:52.704 | 99.99th=[ 667] 00:34:52.704 bw ( KiB/s): min= 128, max= 256, per=3.03%, avg=202.11, stdev=64.93, samples=19 00:34:52.704 iops : min= 32, max= 64, avg=50.53, stdev=16.23, samples=19 00:34:52.704 lat (msec) : 250=6.45%, 500=90.32%, 750=3.23% 00:34:52.704 cpu : usr=98.73%, sys=0.87%, ctx=15, majf=0, minf=22 
00:34:52.704 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.8%, 16=8.1%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename0: (groupid=0, jobs=1): err= 0: pid=2635206: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=76, BW=304KiB/s (311kB/s)(3072KiB/10104msec) 00:34:52.704 slat (nsec): min=7027, max=32569, avg=12471.98, stdev=4902.23 00:34:52.704 clat (msec): min=89, max=287, avg=208.52, stdev=26.97 00:34:52.704 lat (msec): min=89, max=287, avg=208.53, stdev=26.97 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 90], 5.00th=[ 134], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.704 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.704 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 222], 00:34:52.704 | 99.00th=[ 224], 99.50th=[ 241], 99.90th=[ 288], 99.95th=[ 288], 00:34:52.704 | 99.99th=[ 288] 00:34:52.704 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=306.40, stdev=55.49, samples=20 00:34:52.704 iops : min= 64, max= 96, avg=76.60, stdev=13.87, samples=20 00:34:52.704 lat (msec) : 100=2.08%, 250=97.66%, 500=0.26% 00:34:52.704 cpu : usr=98.61%, sys=0.99%, ctx=12, majf=0, minf=23 00:34:52.704 IO depths : 1=0.8%, 2=7.0%, 4=25.0%, 8=55.5%, 16=11.7%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename0: (groupid=0, jobs=1): err= 0: pid=2635207: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=69, BW=276KiB/s 
(283kB/s)(2792KiB/10107msec) 00:34:52.704 slat (nsec): min=7009, max=37419, avg=9645.70, stdev=4013.28 00:34:52.704 clat (msec): min=120, max=359, avg=231.21, stdev=43.89 00:34:52.704 lat (msec): min=120, max=359, avg=231.22, stdev=43.89 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 188], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 205], 00:34:52.704 | 30.00th=[ 207], 40.00th=[ 209], 50.00th=[ 220], 60.00th=[ 222], 00:34:52.704 | 70.00th=[ 230], 80.00th=[ 234], 90.00th=[ 305], 95.00th=[ 355], 00:34:52.704 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:34:52.704 | 99.99th=[ 359] 00:34:52.704 bw ( KiB/s): min= 128, max= 336, per=4.08%, avg=272.80, stdev=48.55, samples=20 00:34:52.704 iops : min= 32, max= 84, avg=68.20, stdev=12.14, samples=20 00:34:52.704 lat (msec) : 250=81.38%, 500=18.62% 00:34:52.704 cpu : usr=98.40%, sys=1.24%, ctx=10, majf=0, minf=26 00:34:52.704 IO depths : 1=0.4%, 2=1.0%, 4=6.7%, 8=78.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 4=88.6%, 8=7.0%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename0: (groupid=0, jobs=1): err= 0: pid=2635208: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=69, BW=278KiB/s (285kB/s)(2808KiB/10096msec) 00:34:52.704 slat (nsec): min=4379, max=37784, avg=9311.86, stdev=3682.04 00:34:52.704 clat (msec): min=156, max=361, avg=229.62, stdev=47.26 00:34:52.704 lat (msec): min=156, max=361, avg=229.63, stdev=47.26 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 157], 5.00th=[ 182], 10.00th=[ 186], 20.00th=[ 192], 00:34:52.704 | 30.00th=[ 203], 40.00th=[ 207], 50.00th=[ 215], 60.00th=[ 220], 00:34:52.704 | 70.00th=[ 253], 80.00th=[ 257], 90.00th=[ 305], 95.00th=[ 355], 00:34:52.704 | 99.00th=[ 359], 99.50th=[ 363], 
99.90th=[ 363], 99.95th=[ 363], 00:34:52.704 | 99.99th=[ 363] 00:34:52.704 bw ( KiB/s): min= 128, max= 368, per=4.11%, avg=274.40, stdev=55.25, samples=20 00:34:52.704 iops : min= 32, max= 92, avg=68.60, stdev=13.81, samples=20 00:34:52.704 lat (msec) : 250=68.09%, 500=31.91% 00:34:52.704 cpu : usr=98.70%, sys=0.93%, ctx=12, majf=0, minf=21 00:34:52.704 IO depths : 1=0.4%, 2=1.6%, 4=8.5%, 8=76.6%, 16=12.8%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 4=89.2%, 8=6.3%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=702,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename0: (groupid=0, jobs=1): err= 0: pid=2635209: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10087msec) 00:34:52.704 slat (nsec): min=5581, max=20202, avg=8868.63, stdev=2343.65 00:34:52.704 clat (msec): min=197, max=515, avg=324.82, stdev=57.23 00:34:52.704 lat (msec): min=197, max=515, avg=324.83, stdev=57.23 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 199], 5.00th=[ 222], 10.00th=[ 266], 20.00th=[ 296], 00:34:52.704 | 30.00th=[ 305], 40.00th=[ 309], 50.00th=[ 321], 60.00th=[ 342], 00:34:52.704 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 439], 00:34:52.704 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:34:52.704 | 99.99th=[ 514] 00:34:52.704 bw ( KiB/s): min= 128, max= 272, per=3.03%, avg=202.11, stdev=55.22, samples=19 00:34:52.704 iops : min= 32, max= 68, avg=50.53, stdev=13.81, samples=19 00:34:52.704 lat (msec) : 250=6.85%, 500=89.92%, 750=3.23% 00:34:52.704 cpu : usr=98.72%, sys=0.91%, ctx=12, majf=0, minf=25 00:34:52.704 IO depths : 1=2.2%, 2=8.3%, 4=24.4%, 8=54.8%, 16=10.3%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 
4=94.2%, 8=0.2%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename0: (groupid=0, jobs=1): err= 0: pid=2635210: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=68, BW=274KiB/s (281kB/s)(2768KiB/10088msec) 00:34:52.704 slat (nsec): min=5892, max=20823, avg=8719.36, stdev=2076.73 00:34:52.704 clat (msec): min=146, max=497, avg=233.02, stdev=56.91 00:34:52.704 lat (msec): min=146, max=497, avg=233.03, stdev=56.91 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 182], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 199], 00:34:52.704 | 30.00th=[ 203], 40.00th=[ 205], 50.00th=[ 215], 60.00th=[ 220], 00:34:52.704 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 305], 95.00th=[ 359], 00:34:52.704 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 498], 99.95th=[ 498], 00:34:52.704 | 99.99th=[ 498] 00:34:52.704 bw ( KiB/s): min= 128, max= 368, per=4.05%, avg=270.40, stdev=54.17, samples=20 00:34:52.704 iops : min= 32, max= 92, avg=67.60, stdev=13.54, samples=20 00:34:52.704 lat (msec) : 250=82.66%, 500=17.34% 00:34:52.704 cpu : usr=98.62%, sys=1.01%, ctx=7, majf=0, minf=23 00:34:52.704 IO depths : 1=0.1%, 2=1.2%, 4=8.1%, 8=77.5%, 16=13.2%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 4=89.0%, 8=6.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=692,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename1: (groupid=0, jobs=1): err= 0: pid=2635211: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=74, BW=298KiB/s (305kB/s)(3008KiB/10097msec) 00:34:52.704 slat (nsec): min=4365, max=27406, avg=8661.86, stdev=2350.92 00:34:52.704 clat (msec): min=120, max=308, avg=214.73, stdev=20.89 00:34:52.704 lat (msec): min=120, max=308, avg=214.74, stdev=20.89 
00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 121], 5.00th=[ 201], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.704 | 30.00th=[ 209], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.704 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 224], 00:34:52.704 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:34:52.704 | 99.99th=[ 309] 00:34:52.704 bw ( KiB/s): min= 128, max= 384, per=4.41%, avg=294.40, stdev=73.12, samples=20 00:34:52.704 iops : min= 32, max= 96, avg=73.60, stdev=18.28, samples=20 00:34:52.704 lat (msec) : 250=97.87%, 500=2.13% 00:34:52.704 cpu : usr=98.75%, sys=0.87%, ctx=8, majf=0, minf=21 00:34:52.704 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.704 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.704 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.704 filename1: (groupid=0, jobs=1): err= 0: pid=2635212: Tue Dec 10 23:05:59 2024 00:34:52.704 read: IOPS=72, BW=290KiB/s (297kB/s)(2928KiB/10089msec) 00:34:52.704 slat (nsec): min=4380, max=15932, avg=8537.51, stdev=1825.80 00:34:52.704 clat (msec): min=140, max=469, avg=219.99, stdev=40.13 00:34:52.704 lat (msec): min=140, max=469, avg=220.00, stdev=40.13 00:34:52.704 clat percentiles (msec): 00:34:52.704 | 1.00th=[ 182], 5.00th=[ 197], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.704 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 218], 60.00th=[ 222], 00:34:52.704 | 70.00th=[ 222], 80.00th=[ 224], 90.00th=[ 224], 95.00th=[ 226], 00:34:52.704 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:34:52.704 | 99.99th=[ 468] 00:34:52.704 bw ( KiB/s): min= 128, max= 336, per=4.29%, avg=286.40, stdev=49.22, samples=20 00:34:52.704 iops : min= 32, max= 84, avg=71.60, stdev=12.30, samples=20 00:34:52.704 lat 
(msec) : 250=96.17%, 500=3.83% 00:34:52.704 cpu : usr=98.70%, sys=0.93%, ctx=12, majf=0, minf=24 00:34:52.704 IO depths : 1=0.3%, 2=1.1%, 4=8.6%, 8=77.7%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:52.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=89.4%, 8=5.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=732,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename1: (groupid=0, jobs=1): err= 0: pid=2635213: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=75, BW=300KiB/s (307kB/s)(3040KiB/10127msec) 00:34:52.705 slat (nsec): min=6981, max=30487, avg=10034.34, stdev=3970.65 00:34:52.705 clat (msec): min=67, max=358, avg=212.31, stdev=33.63 00:34:52.705 lat (msec): min=67, max=358, avg=212.32, stdev=33.63 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 68], 5.00th=[ 165], 10.00th=[ 197], 20.00th=[ 205], 00:34:52.705 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.705 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 255], 00:34:52.705 | 99.00th=[ 355], 99.50th=[ 359], 99.90th=[ 359], 99.95th=[ 359], 00:34:52.705 | 99.99th=[ 359] 00:34:52.705 bw ( KiB/s): min= 256, max= 384, per=4.46%, avg=297.60, stdev=42.30, samples=20 00:34:52.705 iops : min= 64, max= 96, avg=74.40, stdev=10.58, samples=20 00:34:52.705 lat (msec) : 100=1.84%, 250=92.63%, 500=5.53% 00:34:52.705 cpu : usr=98.63%, sys=0.99%, ctx=37, majf=0, minf=29 00:34:52.705 IO depths : 1=0.1%, 2=2.4%, 4=12.8%, 8=72.2%, 16=12.5%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=760,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename1: (groupid=0, jobs=1): err= 0: 
pid=2635215: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10088msec) 00:34:52.705 slat (nsec): min=6262, max=24884, avg=8901.72, stdev=2615.21 00:34:52.705 clat (msec): min=119, max=515, avg=315.15, stdev=71.23 00:34:52.705 lat (msec): min=119, max=515, avg=315.16, stdev=71.23 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 121], 5.00th=[ 207], 10.00th=[ 207], 20.00th=[ 271], 00:34:52.705 | 30.00th=[ 305], 40.00th=[ 305], 50.00th=[ 309], 60.00th=[ 334], 00:34:52.705 | 70.00th=[ 355], 80.00th=[ 355], 90.00th=[ 359], 95.00th=[ 443], 00:34:52.705 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:34:52.705 | 99.99th=[ 514] 00:34:52.705 bw ( KiB/s): min= 128, max= 256, per=3.12%, avg=208.84, stdev=60.22, samples=19 00:34:52.705 iops : min= 32, max= 64, avg=52.21, stdev=15.05, samples=19 00:34:52.705 lat (msec) : 250=14.45%, 500=82.42%, 750=3.12% 00:34:52.705 cpu : usr=98.56%, sys=1.05%, ctx=15, majf=0, minf=31 00:34:52.705 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename1: (groupid=0, jobs=1): err= 0: pid=2635216: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=77, BW=310KiB/s (317kB/s)(3136KiB/10122msec) 00:34:52.705 slat (nsec): min=6980, max=38705, avg=12281.54, stdev=5133.04 00:34:52.705 clat (msec): min=89, max=223, avg=206.45, stdev=29.10 00:34:52.705 lat (msec): min=89, max=223, avg=206.46, stdev=29.09 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 90], 5.00th=[ 122], 10.00th=[ 199], 20.00th=[ 205], 00:34:52.705 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.705 | 70.00th=[ 222], 80.00th=[ 222], 
90.00th=[ 222], 95.00th=[ 222], 00:34:52.705 | 99.00th=[ 224], 99.50th=[ 224], 99.90th=[ 224], 99.95th=[ 224], 00:34:52.705 | 99.99th=[ 224] 00:34:52.705 bw ( KiB/s): min= 256, max= 384, per=4.61%, avg=307.20, stdev=64.34, samples=20 00:34:52.705 iops : min= 64, max= 96, avg=76.80, stdev=16.08, samples=20 00:34:52.705 lat (msec) : 100=2.04%, 250=97.96% 00:34:52.705 cpu : usr=98.80%, sys=0.84%, ctx=13, majf=0, minf=29 00:34:52.705 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=784,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename1: (groupid=0, jobs=1): err= 0: pid=2635217: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=74, BW=297KiB/s (304kB/s)(3000KiB/10103msec) 00:34:52.705 slat (nsec): min=6231, max=34388, avg=10556.10, stdev=4478.78 00:34:52.705 clat (msec): min=120, max=314, avg=215.29, stdev=19.37 00:34:52.705 lat (msec): min=120, max=314, avg=215.30, stdev=19.37 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 121], 5.00th=[ 201], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.705 | 30.00th=[ 207], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 222], 00:34:52.705 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 224], 00:34:52.705 | 99.00th=[ 275], 99.50th=[ 313], 99.90th=[ 317], 99.95th=[ 317], 00:34:52.705 | 99.99th=[ 317] 00:34:52.705 bw ( KiB/s): min= 128, max= 368, per=4.40%, avg=293.60, stdev=63.85, samples=20 00:34:52.705 iops : min= 32, max= 92, avg=73.40, stdev=15.96, samples=20 00:34:52.705 lat (msec) : 250=95.73%, 500=4.27% 00:34:52.705 cpu : usr=98.66%, sys=0.97%, ctx=14, majf=0, minf=28 00:34:52.705 IO depths : 1=0.1%, 2=6.4%, 4=25.1%, 8=56.1%, 16=12.3%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=750,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename1: (groupid=0, jobs=1): err= 0: pid=2635218: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=68, BW=276KiB/s (283kB/s)(2784KiB/10088msec) 00:34:52.705 slat (nsec): min=7035, max=19332, avg=8718.94, stdev=2071.06 00:34:52.705 clat (msec): min=143, max=467, avg=231.51, stdev=56.12 00:34:52.705 lat (msec): min=143, max=467, avg=231.52, stdev=56.12 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 144], 5.00th=[ 192], 10.00th=[ 197], 20.00th=[ 199], 00:34:52.705 | 30.00th=[ 203], 40.00th=[ 205], 50.00th=[ 209], 60.00th=[ 224], 00:34:52.705 | 70.00th=[ 236], 80.00th=[ 245], 90.00th=[ 305], 95.00th=[ 359], 00:34:52.705 | 99.00th=[ 468], 99.50th=[ 468], 99.90th=[ 468], 99.95th=[ 468], 00:34:52.705 | 99.99th=[ 468] 00:34:52.705 bw ( KiB/s): min= 128, max= 368, per=4.08%, avg=272.00, stdev=54.94, samples=20 00:34:52.705 iops : min= 32, max= 92, avg=68.00, stdev=13.73, samples=20 00:34:52.705 lat (msec) : 250=85.06%, 500=14.94% 00:34:52.705 cpu : usr=98.71%, sys=0.92%, ctx=8, majf=0, minf=26 00:34:52.705 IO depths : 1=0.6%, 2=1.9%, 4=9.1%, 8=75.9%, 16=12.6%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=89.3%, 8=6.1%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename1: (groupid=0, jobs=1): err= 0: pid=2635219: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=76, BW=304KiB/s (311kB/s)(3072KiB/10104msec) 00:34:52.705 slat (nsec): min=6997, max=37246, avg=12083.80, stdev=5132.68 00:34:52.705 clat (msec): min=89, max=236, avg=208.52, 
stdev=26.01 00:34:52.705 lat (msec): min=89, max=236, avg=208.53, stdev=26.01 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 90], 5.00th=[ 134], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.705 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.705 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 222], 00:34:52.705 | 99.00th=[ 224], 99.50th=[ 224], 99.90th=[ 236], 99.95th=[ 236], 00:34:52.705 | 99.99th=[ 236] 00:34:52.705 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=306.40, stdev=55.49, samples=20 00:34:52.705 iops : min= 64, max= 96, avg=76.60, stdev=13.87, samples=20 00:34:52.705 lat (msec) : 100=2.08%, 250=97.92% 00:34:52.705 cpu : usr=98.57%, sys=1.05%, ctx=16, majf=0, minf=25 00:34:52.705 IO depths : 1=0.3%, 2=6.5%, 4=25.0%, 8=56.0%, 16=12.2%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename2: (groupid=0, jobs=1): err= 0: pid=2635220: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=76, BW=304KiB/s (311kB/s)(3072KiB/10104msec) 00:34:52.705 slat (nsec): min=7023, max=32077, avg=12218.74, stdev=5181.70 00:34:52.705 clat (msec): min=80, max=273, avg=208.51, stdev=26.97 00:34:52.705 lat (msec): min=80, max=273, avg=208.52, stdev=26.97 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 90], 5.00th=[ 134], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.705 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.705 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 222], 00:34:52.705 | 99.00th=[ 224], 99.50th=[ 255], 99.90th=[ 275], 99.95th=[ 275], 00:34:52.705 | 99.99th=[ 275] 00:34:52.705 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=306.40, stdev=58.79, samples=20 00:34:52.705 
iops : min= 64, max= 96, avg=76.60, stdev=14.70, samples=20 00:34:52.705 lat (msec) : 100=2.08%, 250=97.40%, 500=0.52% 00:34:52.705 cpu : usr=98.47%, sys=1.16%, ctx=16, majf=0, minf=32 00:34:52.705 IO depths : 1=1.7%, 2=7.9%, 4=25.0%, 8=54.6%, 16=10.8%, 32=0.0%, >=64=0.0% 00:34:52.705 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.705 issued rwts: total=768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.705 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.705 filename2: (groupid=0, jobs=1): err= 0: pid=2635221: Tue Dec 10 23:05:59 2024 00:34:52.705 read: IOPS=69, BW=277KiB/s (283kB/s)(2792KiB/10096msec) 00:34:52.705 slat (nsec): min=4467, max=16533, avg=8520.73, stdev=2002.06 00:34:52.705 clat (msec): min=154, max=428, avg=231.21, stdev=48.24 00:34:52.705 lat (msec): min=154, max=428, avg=231.22, stdev=48.24 00:34:52.705 clat percentiles (msec): 00:34:52.705 | 1.00th=[ 167], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 192], 00:34:52.705 | 30.00th=[ 205], 40.00th=[ 207], 50.00th=[ 220], 60.00th=[ 222], 00:34:52.705 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 309], 95.00th=[ 355], 00:34:52.705 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 430], 99.95th=[ 430], 00:34:52.705 | 99.99th=[ 430] 00:34:52.705 bw ( KiB/s): min= 128, max= 368, per=4.08%, avg=272.80, stdev=54.56, samples=20 00:34:52.705 iops : min= 32, max= 92, avg=68.20, stdev=13.64, samples=20 00:34:52.705 lat (msec) : 250=68.48%, 500=31.52% 00:34:52.705 cpu : usr=98.78%, sys=0.84%, ctx=14, majf=0, minf=33 00:34:52.706 IO depths : 1=0.1%, 2=1.0%, 4=7.6%, 8=78.1%, 16=13.2%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=88.9%, 8=6.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:34:52.706 filename2: (groupid=0, jobs=1): err= 0: pid=2635222: Tue Dec 10 23:05:59 2024 00:34:52.706 read: IOPS=67, BW=271KiB/s (277kB/s)(2736KiB/10106msec) 00:34:52.706 slat (nsec): min=3387, max=36509, avg=8958.55, stdev=3006.99 00:34:52.706 clat (msec): min=128, max=402, avg=235.98, stdev=41.77 00:34:52.706 lat (msec): min=128, max=402, avg=235.99, stdev=41.77 00:34:52.706 clat percentiles (msec): 00:34:52.706 | 1.00th=[ 188], 5.00th=[ 205], 10.00th=[ 205], 20.00th=[ 220], 00:34:52.706 | 30.00th=[ 220], 40.00th=[ 220], 50.00th=[ 222], 60.00th=[ 222], 00:34:52.706 | 70.00th=[ 222], 80.00th=[ 271], 90.00th=[ 300], 95.00th=[ 342], 00:34:52.706 | 99.00th=[ 351], 99.50th=[ 359], 99.90th=[ 401], 99.95th=[ 401], 00:34:52.706 | 99.99th=[ 401] 00:34:52.706 bw ( KiB/s): min= 128, max= 384, per=4.01%, avg=267.20, stdev=68.10, samples=20 00:34:52.706 iops : min= 32, max= 96, avg=66.80, stdev=17.03, samples=20 00:34:52.706 lat (msec) : 250=77.19%, 500=22.81% 00:34:52.706 cpu : usr=98.70%, sys=0.93%, ctx=11, majf=0, minf=28 00:34:52.706 IO depths : 1=2.6%, 2=6.0%, 4=16.2%, 8=65.2%, 16=9.9%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=91.5%, 8=2.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=684,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.706 filename2: (groupid=0, jobs=1): err= 0: pid=2635223: Tue Dec 10 23:05:59 2024 00:34:52.706 read: IOPS=68, BW=276KiB/s (283kB/s)(2792KiB/10119msec) 00:34:52.706 slat (nsec): min=7012, max=38755, avg=9841.40, stdev=3950.72 00:34:52.706 clat (msec): min=131, max=388, avg=231.27, stdev=43.71 00:34:52.706 lat (msec): min=131, max=388, avg=231.28, stdev=43.71 00:34:52.706 clat percentiles (msec): 00:34:52.706 | 1.00th=[ 186], 5.00th=[ 199], 10.00th=[ 203], 20.00th=[ 205], 00:34:52.706 | 30.00th=[ 207], 40.00th=[ 209], 
50.00th=[ 220], 60.00th=[ 224], 00:34:52.706 | 70.00th=[ 232], 80.00th=[ 234], 90.00th=[ 305], 95.00th=[ 355], 00:34:52.706 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 388], 00:34:52.706 | 99.99th=[ 388] 00:34:52.706 bw ( KiB/s): min= 128, max= 384, per=4.08%, avg=272.80, stdev=56.98, samples=20 00:34:52.706 iops : min= 32, max= 96, avg=68.20, stdev=14.24, samples=20 00:34:52.706 lat (msec) : 250=81.66%, 500=18.34% 00:34:52.706 cpu : usr=98.73%, sys=0.90%, ctx=12, majf=0, minf=29 00:34:52.706 IO depths : 1=0.3%, 2=1.3%, 4=8.0%, 8=77.4%, 16=13.0%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=89.0%, 8=6.6%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.706 filename2: (groupid=0, jobs=1): err= 0: pid=2635225: Tue Dec 10 23:05:59 2024 00:34:52.706 read: IOPS=73, BW=292KiB/s (299kB/s)(2952KiB/10100msec) 00:34:52.706 slat (nsec): min=6282, max=27067, avg=8800.52, stdev=2480.90 00:34:52.706 clat (msec): min=148, max=440, avg=218.55, stdev=26.97 00:34:52.706 lat (msec): min=148, max=440, avg=218.56, stdev=26.97 00:34:52.706 clat percentiles (msec): 00:34:52.706 | 1.00th=[ 148], 5.00th=[ 190], 10.00th=[ 203], 20.00th=[ 205], 00:34:52.706 | 30.00th=[ 215], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 222], 00:34:52.706 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 232], 00:34:52.706 | 99.00th=[ 326], 99.50th=[ 338], 99.90th=[ 443], 99.95th=[ 443], 00:34:52.706 | 99.99th=[ 443] 00:34:52.706 bw ( KiB/s): min= 128, max= 336, per=4.32%, avg=288.80, stdev=48.83, samples=20 00:34:52.706 iops : min= 32, max= 84, avg=72.20, stdev=12.21, samples=20 00:34:52.706 lat (msec) : 250=95.12%, 500=4.88% 00:34:52.706 cpu : usr=98.43%, sys=1.20%, ctx=13, majf=0, minf=26 00:34:52.706 IO depths : 1=0.1%, 2=0.5%, 4=7.3%, 8=79.5%, 
16=12.5%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=89.0%, 8=5.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=738,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.706 filename2: (groupid=0, jobs=1): err= 0: pid=2635226: Tue Dec 10 23:05:59 2024 00:34:52.706 read: IOPS=77, BW=309KiB/s (316kB/s)(3128KiB/10122msec) 00:34:52.706 slat (nsec): min=7026, max=42144, avg=11769.82, stdev=5087.65 00:34:52.706 clat (msec): min=67, max=294, avg=206.83, stdev=30.82 00:34:52.706 lat (msec): min=67, max=294, avg=206.84, stdev=30.81 00:34:52.706 clat percentiles (msec): 00:34:52.706 | 1.00th=[ 68], 5.00th=[ 123], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.706 | 30.00th=[ 207], 40.00th=[ 215], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.706 | 70.00th=[ 220], 80.00th=[ 222], 90.00th=[ 222], 95.00th=[ 224], 00:34:52.706 | 99.00th=[ 224], 99.50th=[ 224], 99.90th=[ 296], 99.95th=[ 296], 00:34:52.706 | 99.99th=[ 296] 00:34:52.706 bw ( KiB/s): min= 256, max= 384, per=4.59%, avg=306.40, stdev=55.49, samples=20 00:34:52.706 iops : min= 64, max= 96, avg=76.60, stdev=13.87, samples=20 00:34:52.706 lat (msec) : 100=3.84%, 250=95.91%, 500=0.26% 00:34:52.706 cpu : usr=98.63%, sys=1.01%, ctx=12, majf=0, minf=39 00:34:52.706 IO depths : 1=0.3%, 2=6.5%, 4=25.1%, 8=56.0%, 16=12.1%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.706 filename2: (groupid=0, jobs=1): err= 0: pid=2635227: Tue Dec 10 23:05:59 2024 00:34:52.706 read: IOPS=71, BW=286KiB/s (293kB/s)(2888KiB/10094msec) 00:34:52.706 slat (nsec): min=7026, max=19446, 
avg=8720.77, stdev=2039.08 00:34:52.706 clat (msec): min=143, max=515, avg=223.08, stdev=48.13 00:34:52.706 lat (msec): min=143, max=515, avg=223.09, stdev=48.13 00:34:52.706 clat percentiles (msec): 00:34:52.706 | 1.00th=[ 182], 5.00th=[ 199], 10.00th=[ 201], 20.00th=[ 205], 00:34:52.706 | 30.00th=[ 209], 40.00th=[ 218], 50.00th=[ 220], 60.00th=[ 220], 00:34:52.706 | 70.00th=[ 222], 80.00th=[ 222], 90.00th=[ 224], 95.00th=[ 226], 00:34:52.706 | 99.00th=[ 514], 99.50th=[ 514], 99.90th=[ 514], 99.95th=[ 514], 00:34:52.706 | 99.99th=[ 514] 00:34:52.706 bw ( KiB/s): min= 256, max= 384, per=4.46%, avg=297.26, stdev=40.73, samples=19 00:34:52.706 iops : min= 64, max= 96, avg=74.32, stdev=10.18, samples=19 00:34:52.706 lat (msec) : 250=95.29%, 500=2.49%, 750=2.22% 00:34:52.706 cpu : usr=98.61%, sys=1.02%, ctx=11, majf=0, minf=29 00:34:52.706 IO depths : 1=0.7%, 2=1.9%, 4=9.8%, 8=75.6%, 16=11.9%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=89.7%, 8=4.8%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.706 filename2: (groupid=0, jobs=1): err= 0: pid=2635228: Tue Dec 10 23:05:59 2024 00:34:52.706 read: IOPS=74, BW=296KiB/s (304kB/s)(3008KiB/10147msec) 00:34:52.706 slat (nsec): min=6731, max=41504, avg=17159.46, stdev=4902.02 00:34:52.706 clat (msec): min=4, max=381, avg=213.97, stdev=68.64 00:34:52.706 lat (msec): min=4, max=381, avg=213.98, stdev=68.64 00:34:52.706 clat percentiles (msec): 00:34:52.706 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 174], 20.00th=[ 201], 00:34:52.706 | 30.00th=[ 205], 40.00th=[ 207], 50.00th=[ 209], 60.00th=[ 222], 00:34:52.706 | 70.00th=[ 230], 80.00th=[ 239], 90.00th=[ 300], 95.00th=[ 355], 00:34:52.706 | 99.00th=[ 359], 99.50th=[ 359], 99.90th=[ 384], 99.95th=[ 384], 00:34:52.706 | 99.99th=[ 384] 00:34:52.706 bw ( 
KiB/s): min= 224, max= 640, per=4.41%, avg=294.40, stdev=90.27, samples=20 00:34:52.706 iops : min= 56, max= 160, avg=73.60, stdev=22.57, samples=20 00:34:52.706 lat (msec) : 10=2.13%, 20=0.93%, 50=2.13%, 100=3.19%, 250=76.20% 00:34:52.706 lat (msec) : 500=15.43% 00:34:52.706 cpu : usr=98.38%, sys=1.23%, ctx=6, majf=0, minf=31 00:34:52.706 IO depths : 1=0.3%, 2=1.9%, 4=9.6%, 8=75.0%, 16=13.3%, 32=0.0%, >=64=0.0% 00:34:52.706 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:52.706 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:52.706 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:52.706 00:34:52.706 Run status group 0 (all jobs): 00:34:52.706 READ: bw=6664KiB/s (6824kB/s), 197KiB/s-310KiB/s (201kB/s-317kB/s), io=66.0MiB (69.2MB), run=10087-10147msec 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.706 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 
23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 bdev_null0 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 [2024-12-10 23:05:59.366466] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 bdev_null1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:52.707 23:05:59 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:52.707 { 00:34:52.707 "params": { 00:34:52.707 "name": "Nvme$subsystem", 00:34:52.707 "trtype": "$TEST_TRANSPORT", 00:34:52.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.707 "adrfam": "ipv4", 00:34:52.707 "trsvcid": "$NVMF_PORT", 00:34:52.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.707 "hdgst": ${hdgst:-false}, 00:34:52.707 "ddgst": ${ddgst:-false} 00:34:52.707 }, 00:34:52.707 "method": "bdev_nvme_attach_controller" 00:34:52.707 } 00:34:52.707 EOF 00:34:52.707 )") 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:52.707 { 00:34:52.707 "params": { 00:34:52.707 "name": "Nvme$subsystem", 00:34:52.707 "trtype": "$TEST_TRANSPORT", 00:34:52.707 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.707 "adrfam": "ipv4", 00:34:52.707 "trsvcid": "$NVMF_PORT", 00:34:52.707 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.707 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.707 "hdgst": ${hdgst:-false}, 00:34:52.707 "ddgst": 
${ddgst:-false} 00:34:52.707 }, 00:34:52.707 "method": "bdev_nvme_attach_controller" 00:34:52.707 } 00:34:52.707 EOF 00:34:52.707 )") 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:52.707 "params": { 00:34:52.707 "name": "Nvme0", 00:34:52.707 "trtype": "tcp", 00:34:52.707 "traddr": "10.0.0.2", 00:34:52.707 "adrfam": "ipv4", 00:34:52.707 "trsvcid": "4420", 00:34:52.707 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:52.707 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:52.707 "hdgst": false, 00:34:52.707 "ddgst": false 00:34:52.707 }, 00:34:52.707 "method": "bdev_nvme_attach_controller" 00:34:52.707 },{ 00:34:52.707 "params": { 00:34:52.707 "name": "Nvme1", 00:34:52.707 "trtype": "tcp", 00:34:52.707 "traddr": "10.0.0.2", 00:34:52.707 "adrfam": "ipv4", 00:34:52.707 "trsvcid": "4420", 00:34:52.707 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:52.707 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:52.707 "hdgst": false, 00:34:52.707 "ddgst": false 00:34:52.707 }, 00:34:52.707 "method": "bdev_nvme_attach_controller" 00:34:52.707 }' 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:52.707 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:52.708 23:05:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.708 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:52.708 ... 00:34:52.708 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:52.708 ... 00:34:52.708 fio-3.35 00:34:52.708 Starting 4 threads 00:34:57.981 00:34:57.981 filename0: (groupid=0, jobs=1): err= 0: pid=2637225: Tue Dec 10 23:06:05 2024 00:34:57.981 read: IOPS=2697, BW=21.1MiB/s (22.1MB/s)(105MiB/5003msec) 00:34:57.981 slat (nsec): min=6308, max=46587, avg=8850.79, stdev=2957.01 00:34:57.981 clat (usec): min=819, max=5467, avg=2940.08, stdev=408.79 00:34:57.981 lat (usec): min=839, max=5482, avg=2948.93, stdev=408.62 00:34:57.981 clat percentiles (usec): 00:34:57.981 | 1.00th=[ 1762], 5.00th=[ 2245], 10.00th=[ 2409], 20.00th=[ 2671], 00:34:57.981 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 3032], 60.00th=[ 3064], 00:34:57.982 | 70.00th=[ 3097], 80.00th=[ 3130], 90.00th=[ 3326], 95.00th=[ 3523], 00:34:57.982 | 99.00th=[ 4047], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 5276], 00:34:57.982 | 99.99th=[ 5473] 00:34:57.982 bw ( KiB/s): min=20832, max=22352, per=26.09%, avg=21584.00, stdev=411.88, samples=10 00:34:57.982 iops : min= 2604, max= 2794, avg=2698.00, stdev=51.48, samples=10 00:34:57.982 lat (usec) : 1000=0.25% 00:34:57.982 lat (msec) : 2=1.73%, 
4=96.84%, 10=1.17% 00:34:57.982 cpu : usr=95.56%, sys=4.08%, ctx=6, majf=0, minf=9 00:34:57.982 IO depths : 1=0.2%, 2=3.3%, 4=67.5%, 8=29.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 issued rwts: total=13495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.982 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:57.982 filename0: (groupid=0, jobs=1): err= 0: pid=2637226: Tue Dec 10 23:06:05 2024 00:34:57.982 read: IOPS=2623, BW=20.5MiB/s (21.5MB/s)(102MiB/5001msec) 00:34:57.982 slat (nsec): min=6325, max=47870, avg=9023.86, stdev=3088.25 00:34:57.982 clat (usec): min=556, max=5661, avg=3024.10, stdev=440.31 00:34:57.982 lat (usec): min=566, max=5671, avg=3033.12, stdev=440.22 00:34:57.982 clat percentiles (usec): 00:34:57.982 | 1.00th=[ 2024], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2737], 00:34:57.982 | 30.00th=[ 2868], 40.00th=[ 3032], 50.00th=[ 3064], 60.00th=[ 3064], 00:34:57.982 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3458], 95.00th=[ 3785], 00:34:57.982 | 99.00th=[ 4490], 99.50th=[ 4883], 99.90th=[ 5342], 99.95th=[ 5407], 00:34:57.982 | 99.99th=[ 5669] 00:34:57.982 bw ( KiB/s): min=20240, max=21728, per=25.26%, avg=20894.22, stdev=436.31, samples=9 00:34:57.982 iops : min= 2530, max= 2716, avg=2611.78, stdev=54.54, samples=9 00:34:57.982 lat (usec) : 750=0.01%, 1000=0.04% 00:34:57.982 lat (msec) : 2=0.82%, 4=96.19%, 10=2.94% 00:34:57.982 cpu : usr=95.52%, sys=4.12%, ctx=8, majf=0, minf=9 00:34:57.982 IO depths : 1=0.1%, 2=3.8%, 4=66.6%, 8=29.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 complete : 0=0.0%, 4=93.9%, 8=6.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 issued rwts: total=13118,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.982 latency : target=0, window=0, percentile=100.00%, 
depth=8 00:34:57.982 filename1: (groupid=0, jobs=1): err= 0: pid=2637227: Tue Dec 10 23:06:05 2024 00:34:57.982 read: IOPS=2554, BW=20.0MiB/s (20.9MB/s)(99.8MiB/5002msec) 00:34:57.982 slat (nsec): min=6285, max=42447, avg=8706.41, stdev=2824.26 00:34:57.982 clat (usec): min=739, max=5646, avg=3105.77, stdev=394.58 00:34:57.982 lat (usec): min=750, max=5653, avg=3114.47, stdev=394.56 00:34:57.982 clat percentiles (usec): 00:34:57.982 | 1.00th=[ 2180], 5.00th=[ 2507], 10.00th=[ 2704], 20.00th=[ 2900], 00:34:57.982 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3097], 00:34:57.982 | 70.00th=[ 3130], 80.00th=[ 3326], 90.00th=[ 3523], 95.00th=[ 3752], 00:34:57.982 | 99.00th=[ 4621], 99.50th=[ 4883], 99.90th=[ 5407], 99.95th=[ 5407], 00:34:57.982 | 99.99th=[ 5669] 00:34:57.982 bw ( KiB/s): min=19911, max=20784, per=24.71%, avg=20437.50, stdev=327.52, samples=10 00:34:57.982 iops : min= 2488, max= 2598, avg=2554.60, stdev=41.10, samples=10 00:34:57.982 lat (usec) : 750=0.01%, 1000=0.01% 00:34:57.982 lat (msec) : 2=0.43%, 4=96.78%, 10=2.78% 00:34:57.982 cpu : usr=95.64%, sys=4.02%, ctx=6, majf=0, minf=9 00:34:57.982 IO depths : 1=0.1%, 2=2.1%, 4=70.3%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 issued rwts: total=12779,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.982 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:57.982 filename1: (groupid=0, jobs=1): err= 0: pid=2637229: Tue Dec 10 23:06:05 2024 00:34:57.982 read: IOPS=2466, BW=19.3MiB/s (20.2MB/s)(96.4MiB/5002msec) 00:34:57.982 slat (nsec): min=6343, max=45473, avg=8818.14, stdev=3058.55 00:34:57.982 clat (usec): min=1239, max=5519, avg=3216.49, stdev=457.76 00:34:57.982 lat (usec): min=1250, max=5526, avg=3225.31, stdev=457.41 00:34:57.982 clat percentiles (usec): 00:34:57.982 | 1.00th=[ 2311], 5.00th=[ 2704], 
10.00th=[ 2868], 20.00th=[ 2999], 00:34:57.982 | 30.00th=[ 3032], 40.00th=[ 3064], 50.00th=[ 3064], 60.00th=[ 3130], 00:34:57.982 | 70.00th=[ 3261], 80.00th=[ 3359], 90.00th=[ 3687], 95.00th=[ 4293], 00:34:57.982 | 99.00th=[ 4948], 99.50th=[ 5145], 99.90th=[ 5342], 99.95th=[ 5342], 00:34:57.982 | 99.99th=[ 5407] 00:34:57.982 bw ( KiB/s): min=19312, max=20144, per=23.82%, avg=19704.89, stdev=302.96, samples=9 00:34:57.982 iops : min= 2414, max= 2518, avg=2463.11, stdev=37.87, samples=9 00:34:57.982 lat (msec) : 2=0.21%, 4=92.84%, 10=6.95% 00:34:57.982 cpu : usr=95.12%, sys=4.52%, ctx=6, majf=0, minf=9 00:34:57.982 IO depths : 1=0.1%, 2=2.2%, 4=71.1%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:57.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:57.982 issued rwts: total=12339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:57.982 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:57.982 00:34:57.982 Run status group 0 (all jobs): 00:34:57.982 READ: bw=80.8MiB/s (84.7MB/s), 19.3MiB/s-21.1MiB/s (20.2MB/s-22.1MB/s), io=404MiB (424MB), run=5001-5003msec 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.982 
23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.982 00:34:57.982 real 0m24.508s 00:34:57.982 user 4m54.465s 00:34:57.982 sys 0m4.954s 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:57.982 23:06:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:57.982 ************************************ 00:34:57.982 END TEST 
fio_dif_rand_params 00:34:57.982 ************************************ 00:34:58.242 23:06:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:58.242 23:06:05 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:58.242 23:06:05 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.242 23:06:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:58.242 ************************************ 00:34:58.242 START TEST fio_dif_digest 00:34:58.242 ************************************ 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:58.242 23:06:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.242 bdev_null0 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:58.242 [2024-12-10 23:06:05.815679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:58.242 
23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:58.242 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:58.242 { 00:34:58.242 "params": { 00:34:58.242 "name": "Nvme$subsystem", 00:34:58.242 "trtype": "$TEST_TRANSPORT", 00:34:58.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:58.242 "adrfam": "ipv4", 00:34:58.242 "trsvcid": "$NVMF_PORT", 00:34:58.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:58.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:58.242 "hdgst": ${hdgst:-false}, 00:34:58.242 "ddgst": ${ddgst:-false} 00:34:58.242 }, 00:34:58.242 "method": "bdev_nvme_attach_controller" 00:34:58.242 } 00:34:58.242 EOF 00:34:58.242 )") 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:58.243 23:06:05 
nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:58.243 "params": { 00:34:58.243 "name": "Nvme0", 00:34:58.243 "trtype": "tcp", 00:34:58.243 "traddr": "10.0.0.2", 00:34:58.243 "adrfam": "ipv4", 00:34:58.243 "trsvcid": "4420", 00:34:58.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:58.243 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:58.243 "hdgst": true, 00:34:58.243 "ddgst": true 00:34:58.243 }, 00:34:58.243 "method": "bdev_nvme_attach_controller" 00:34:58.243 }' 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/fio/spdk_bdev' 00:34:58.243 23:06:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:58.501 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:58.501 ... 
00:34:58.501 fio-3.35 00:34:58.501 Starting 3 threads 00:35:10.704 00:35:10.704 filename0: (groupid=0, jobs=1): err= 0: pid=2638527: Tue Dec 10 23:06:16 2024 00:35:10.704 read: IOPS=258, BW=32.3MiB/s (33.9MB/s)(325MiB/10045msec) 00:35:10.704 slat (nsec): min=6618, max=41770, avg=11945.92, stdev=1826.32 00:35:10.704 clat (usec): min=8921, max=49595, avg=11568.72, stdev=1301.79 00:35:10.704 lat (usec): min=8933, max=49608, avg=11580.66, stdev=1301.85 00:35:10.704 clat percentiles (usec): 00:35:10.704 | 1.00th=[ 9634], 5.00th=[10290], 10.00th=[10552], 20.00th=[10945], 00:35:10.704 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:35:10.704 | 70.00th=[11994], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:35:10.704 | 99.00th=[13566], 99.50th=[13960], 99.90th=[14746], 99.95th=[47449], 00:35:10.704 | 99.99th=[49546] 00:35:10.704 bw ( KiB/s): min=32256, max=34304, per=32.22%, avg=33228.80, stdev=624.87, samples=20 00:35:10.704 iops : min= 252, max= 268, avg=259.60, stdev= 4.88, samples=20 00:35:10.704 lat (msec) : 10=2.58%, 20=97.34%, 50=0.08% 00:35:10.704 cpu : usr=95.51%, sys=4.16%, ctx=22, majf=0, minf=11 00:35:10.704 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.704 issued rwts: total=2598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.704 filename0: (groupid=0, jobs=1): err= 0: pid=2638528: Tue Dec 10 23:06:16 2024 00:35:10.704 read: IOPS=262, BW=32.8MiB/s (34.4MB/s)(329MiB/10046msec) 00:35:10.704 slat (nsec): min=6624, max=29803, avg=11696.99, stdev=1819.67 00:35:10.704 clat (usec): min=6752, max=47620, avg=11416.17, stdev=1262.19 00:35:10.704 lat (usec): min=6764, max=47632, avg=11427.87, stdev=1262.14 00:35:10.704 clat percentiles (usec): 00:35:10.704 | 1.00th=[ 
9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:35:10.704 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:35:10.704 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:35:10.704 | 99.00th=[13435], 99.50th=[13698], 99.90th=[15139], 99.95th=[45876], 00:35:10.704 | 99.99th=[47449] 00:35:10.704 bw ( KiB/s): min=32768, max=34816, per=32.65%, avg=33676.80, stdev=494.70, samples=20 00:35:10.704 iops : min= 256, max= 272, avg=263.10, stdev= 3.86, samples=20 00:35:10.704 lat (msec) : 10=3.49%, 20=96.43%, 50=0.08% 00:35:10.704 cpu : usr=95.70%, sys=3.96%, ctx=21, majf=0, minf=9 00:35:10.704 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.704 issued rwts: total=2633,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.704 filename0: (groupid=0, jobs=1): err= 0: pid=2638529: Tue Dec 10 23:06:16 2024 00:35:10.704 read: IOPS=285, BW=35.6MiB/s (37.4MB/s)(358MiB/10045msec) 00:35:10.704 slat (nsec): min=6564, max=27672, avg=11775.07, stdev=1810.43 00:35:10.704 clat (usec): min=8030, max=50055, avg=10493.58, stdev=1255.72 00:35:10.704 lat (usec): min=8039, max=50068, avg=10505.36, stdev=1255.68 00:35:10.704 clat percentiles (usec): 00:35:10.704 | 1.00th=[ 8848], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:35:10.704 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10683], 00:35:10.704 | 70.00th=[10814], 80.00th=[11076], 90.00th=[11338], 95.00th=[11600], 00:35:10.704 | 99.00th=[12256], 99.50th=[12518], 99.90th=[15926], 99.95th=[49021], 00:35:10.704 | 99.99th=[50070] 00:35:10.704 bw ( KiB/s): min=35584, max=37888, per=35.52%, avg=36633.60, stdev=679.35, samples=20 00:35:10.704 iops : min= 278, max= 296, avg=286.20, stdev= 5.31, samples=20 
00:35:10.704 lat (msec) : 10=24.72%, 20=75.21%, 50=0.03%, 100=0.03% 00:35:10.704 cpu : usr=95.69%, sys=3.98%, ctx=14, majf=0, minf=0 00:35:10.704 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:10.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:10.704 issued rwts: total=2864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:10.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:10.704 00:35:10.704 Run status group 0 (all jobs): 00:35:10.704 READ: bw=101MiB/s (106MB/s), 32.3MiB/s-35.6MiB/s (33.9MB/s-37.4MB/s), io=1012MiB (1061MB), run=10045-10046msec 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.704 00:35:10.704 real 
0m11.341s 00:35:10.704 user 0m35.334s 00:35:10.704 sys 0m1.574s 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:10.704 23:06:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:10.704 ************************************ 00:35:10.704 END TEST fio_dif_digest 00:35:10.704 ************************************ 00:35:10.704 23:06:17 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:10.704 23:06:17 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.704 rmmod nvme_tcp 00:35:10.704 rmmod nvme_fabrics 00:35:10.704 rmmod nvme_keyring 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2629806 ']' 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2629806 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2629806 ']' 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2629806 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2629806 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.704 23:06:17 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2629806' 00:35:10.704 killing process with pid 2629806 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2629806 00:35:10.704 23:06:17 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2629806 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:10.704 23:06:17 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:35:12.612 Waiting for block devices as requested 00:35:12.612 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:12.612 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:12.612 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:12.872 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:12.872 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:12.872 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.131 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:13.131 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:13.131 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:13.391 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:13.391 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:13.391 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:13.391 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:13.650 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:13.650 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:13.650 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:13.909 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@791 -- # 
grep -v SPDK_NVMF 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:13.909 23:06:21 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.909 23:06:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:13.909 23:06:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.447 23:06:23 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:16.447 00:35:16.447 real 1m14.474s 00:35:16.447 user 7m12.612s 00:35:16.447 sys 0m19.810s 00:35:16.447 23:06:23 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:16.447 23:06:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.447 ************************************ 00:35:16.447 END TEST nvmf_dif 00:35:16.447 ************************************ 00:35:16.447 23:06:23 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:16.447 23:06:23 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:16.447 23:06:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.447 23:06:23 -- common/autotest_common.sh@10 -- # set +x 00:35:16.447 ************************************ 00:35:16.447 START TEST nvmf_abort_qd_sizes 00:35:16.447 ************************************ 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:16.447 * Looking for test storage... 
00:35:16.447 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/target 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:16.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.447 --rc genhtml_branch_coverage=1 00:35:16.447 --rc genhtml_function_coverage=1 00:35:16.447 --rc genhtml_legend=1 00:35:16.447 --rc geninfo_all_blocks=1 00:35:16.447 --rc geninfo_unexecuted_blocks=1 00:35:16.447 00:35:16.447 ' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:16.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.447 --rc genhtml_branch_coverage=1 00:35:16.447 --rc genhtml_function_coverage=1 00:35:16.447 --rc genhtml_legend=1 00:35:16.447 --rc 
geninfo_all_blocks=1 00:35:16.447 --rc geninfo_unexecuted_blocks=1 00:35:16.447 00:35:16.447 ' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:16.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.447 --rc genhtml_branch_coverage=1 00:35:16.447 --rc genhtml_function_coverage=1 00:35:16.447 --rc genhtml_legend=1 00:35:16.447 --rc geninfo_all_blocks=1 00:35:16.447 --rc geninfo_unexecuted_blocks=1 00:35:16.447 00:35:16.447 ' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:16.447 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.447 --rc genhtml_branch_coverage=1 00:35:16.447 --rc genhtml_function_coverage=1 00:35:16.447 --rc genhtml_legend=1 00:35:16.447 --rc geninfo_all_blocks=1 00:35:16.447 --rc geninfo_unexecuted_blocks=1 00:35:16.447 00:35:16.447 ' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.447 23:06:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:16.447 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:16.448 23:06:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:16.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:16.448 23:06:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:21.722 23:06:29 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:21.722 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:21.722 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:21.722 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:21.723 Found net devices under 0000:86:00.0: cvl_0_0 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:21.723 Found net devices under 0000:86:00.1: cvl_0_1 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:21.723 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:21.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:21.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:35:21.982 00:35:21.982 --- 10.0.0.2 ping statistics --- 00:35:21.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.982 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:21.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:21.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:35:21.982 00:35:21.982 --- 10.0.0.1 ping statistics --- 00:35:21.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:21.982 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:35:21.982 23:06:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:35:25.275 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:25.275 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:25.844 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:25.844 23:06:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2646847 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2646847 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2646847 ']' 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.844 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:26.103 [2024-12-10 23:06:33.590003] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:35:26.104 [2024-12-10 23:06:33.590045] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:26.104 [2024-12-10 23:06:33.670886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:26.104 [2024-12-10 23:06:33.713839] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:26.104 [2024-12-10 23:06:33.713874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:26.104 [2024-12-10 23:06:33.713881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:26.104 [2024-12-10 23:06:33.713887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:26.104 [2024-12-10 23:06:33.713893] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:26.104 [2024-12-10 23:06:33.715315] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.104 [2024-12-10 23:06:33.715421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:35:26.104 [2024-12-10 23:06:33.715527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:26.104 [2024-12-10 23:06:33.715527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:35:26.104 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.104 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:26.104 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:26.104 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:26.104 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:26.362 23:06:33 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.363 23:06:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:26.363 ************************************ 00:35:26.363 START TEST spdk_target_abort 00:35:26.363 ************************************ 00:35:26.363 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:26.363 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:26.363 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:26.363 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.363 23:06:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.650 spdk_targetn1 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.650 [2024-12-10 23:06:36.728136] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:29.650 [2024-12-10 23:06:36.772423] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:29.650 23:06:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:32.936 Initializing NVMe Controllers 00:35:32.936 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:32.936 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:32.936 Initialization complete. Launching workers. 
00:35:32.936 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15929, failed: 0 00:35:32.936 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1220, failed to submit 14709 00:35:32.936 success 766, unsuccessful 454, failed 0 00:35:32.936 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:32.936 23:06:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:36.315 Initializing NVMe Controllers 00:35:36.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:36.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:36.315 Initialization complete. Launching workers. 00:35:36.315 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8549, failed: 0 00:35:36.315 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1256, failed to submit 7293 00:35:36.315 success 314, unsuccessful 942, failed 0 00:35:36.315 23:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:36.315 23:06:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:38.856 Initializing NVMe Controllers 00:35:38.856 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:38.856 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:38.856 Initialization complete. Launching workers. 
00:35:38.856 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37831, failed: 0 00:35:38.856 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2789, failed to submit 35042 00:35:38.856 success 584, unsuccessful 2205, failed 0 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:38.856 23:06:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2646847 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2646847 ']' 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2646847 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2646847 00:35:40.233 23:06:47 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2646847' 00:35:40.233 killing process with pid 2646847 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2646847 00:35:40.233 23:06:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2646847 00:35:40.492 00:35:40.492 real 0m14.109s 00:35:40.492 user 0m53.786s 00:35:40.492 sys 0m2.553s 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:40.492 ************************************ 00:35:40.492 END TEST spdk_target_abort 00:35:40.492 ************************************ 00:35:40.492 23:06:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:40.492 23:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:40.492 23:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:40.492 23:06:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:40.492 ************************************ 00:35:40.492 START TEST kernel_target_abort 00:35:40.492 ************************************ 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:35:40.492 23:06:48 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:40.492 23:06:48 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:35:43.028 Waiting for block devices as requested 00:35:43.287 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:43.287 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:43.287 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:43.546 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:43.546 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:43.546 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:43.805 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:43.805 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:43.805 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:44.063 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:44.064 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:44.064 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:44.064 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:44.322 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:44.322 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:44.322 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:44.581 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:35:44.581 23:06:52 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:44.581 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/spdk-gpt.py nvme0n1 00:35:44.581 No valid GPT data, bailing 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:44.582 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:44.840 00:35:44.840 Discovery Log Number of Records 2, Generation counter 2 00:35:44.840 =====Discovery Log Entry 0====== 00:35:44.840 trtype: tcp 00:35:44.840 adrfam: ipv4 00:35:44.840 subtype: current discovery subsystem 00:35:44.840 treq: not specified, sq flow control disable supported 00:35:44.840 portid: 1 00:35:44.840 trsvcid: 4420 00:35:44.840 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:44.840 traddr: 10.0.0.1 00:35:44.840 eflags: none 00:35:44.840 sectype: none 00:35:44.840 =====Discovery Log Entry 1====== 00:35:44.840 trtype: tcp 00:35:44.840 adrfam: ipv4 00:35:44.840 subtype: nvme subsystem 00:35:44.840 treq: not specified, sq flow control disable supported 00:35:44.840 portid: 1 00:35:44.840 trsvcid: 4420 00:35:44.840 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:44.840 traddr: 10.0.0.1 00:35:44.840 eflags: none 00:35:44.840 sectype: none 00:35:44.841 23:06:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:44.841 23:06:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:48.125 Initializing NVMe Controllers 00:35:48.125 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:48.125 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:48.125 Initialization complete. Launching workers. 
00:35:48.125 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92782, failed: 0 00:35:48.125 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92782, failed to submit 0 00:35:48.125 success 0, unsuccessful 92782, failed 0 00:35:48.125 23:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:48.125 23:06:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:51.410 Initializing NVMe Controllers 00:35:51.410 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:51.410 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:51.410 Initialization complete. Launching workers. 00:35:51.410 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 145851, failed: 0 00:35:51.410 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36562, failed to submit 109289 00:35:51.410 success 0, unsuccessful 36562, failed 0 00:35:51.410 23:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:51.410 23:06:58 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:54.693 Initializing NVMe Controllers 00:35:54.693 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:54.693 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:54.693 Initialization complete. Launching workers. 
00:35:54.693 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 136646, failed: 0 00:35:54.693 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34238, failed to submit 102408 00:35:54.693 success 0, unsuccessful 34238, failed 0 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:54.693 23:07:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh 00:35:57.229 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:57.229 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:57.799 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:58.057 00:35:58.057 real 0m17.569s 00:35:58.057 user 0m9.151s 00:35:58.057 sys 0m5.038s 00:35:58.057 23:07:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.057 23:07:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:58.057 ************************************ 00:35:58.057 END TEST kernel_target_abort 00:35:58.057 ************************************ 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:58.057 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:58.058 rmmod nvme_tcp 00:35:58.058 rmmod nvme_fabrics 00:35:58.058 rmmod nvme_keyring 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2646847 ']' 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2646847 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2646847 ']' 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2646847 00:35:58.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/common/autotest_common.sh: line 958: kill: (2646847) - No such process 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2646847 is not found' 00:35:58.058 Process with pid 2646847 is not found 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:58.058 23:07:05 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/setup.sh reset 00:36:01.347 Waiting for block devices as requested 00:36:01.347 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:36:01.347 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:01.347 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:01.347 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:01.347 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:01.347 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:01.347 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:01.347 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:01.606 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:01.606 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:36:01.606 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:36:01.865 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:36:01.865 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:36:01.865 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:36:02.124 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:36:02.124 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:36:02.124 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:02.124 23:07:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:04.659 23:07:11 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:04.659 00:36:04.659 real 0m48.276s 00:36:04.659 user 1m7.273s 00:36:04.659 sys 0m16.294s 00:36:04.659 23:07:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:04.659 23:07:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:04.659 ************************************ 00:36:04.659 END TEST nvmf_abort_qd_sizes 00:36:04.659 ************************************ 00:36:04.659 23:07:11 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:36:04.659 23:07:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:04.659 23:07:11 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:36:04.659 23:07:11 -- common/autotest_common.sh@10 -- # set +x 00:36:04.659 ************************************ 00:36:04.659 START TEST keyring_file 00:36:04.659 ************************************ 00:36:04.659 23:07:11 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/file.sh 00:36:04.659 * Looking for test storage... 00:36:04.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:36:04.659 23:07:12 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:04.659 23:07:12 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:36:04.659 23:07:12 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:04.659 23:07:12 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:04.659 23:07:12 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:04.659 23:07:12 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.660 --rc genhtml_branch_coverage=1 00:36:04.660 --rc genhtml_function_coverage=1 00:36:04.660 --rc genhtml_legend=1 00:36:04.660 --rc geninfo_all_blocks=1 00:36:04.660 --rc geninfo_unexecuted_blocks=1 00:36:04.660 00:36:04.660 ' 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.660 --rc genhtml_branch_coverage=1 00:36:04.660 --rc genhtml_function_coverage=1 00:36:04.660 --rc genhtml_legend=1 00:36:04.660 --rc geninfo_all_blocks=1 00:36:04.660 --rc 
geninfo_unexecuted_blocks=1 00:36:04.660 00:36:04.660 ' 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.660 --rc genhtml_branch_coverage=1 00:36:04.660 --rc genhtml_function_coverage=1 00:36:04.660 --rc genhtml_legend=1 00:36:04.660 --rc geninfo_all_blocks=1 00:36:04.660 --rc geninfo_unexecuted_blocks=1 00:36:04.660 00:36:04.660 ' 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:04.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:04.660 --rc genhtml_branch_coverage=1 00:36:04.660 --rc genhtml_function_coverage=1 00:36:04.660 --rc genhtml_legend=1 00:36:04.660 --rc geninfo_all_blocks=1 00:36:04.660 --rc geninfo_unexecuted_blocks=1 00:36:04.660 00:36:04.660 ' 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:04.660 23:07:12 
keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:04.660 23:07:12 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:04.660 23:07:12 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.660 23:07:12 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.660 23:07:12 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.660 23:07:12 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:04.660 23:07:12 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:04.660 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gFbEJjx5JB 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gFbEJjx5JB 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gFbEJjx5JB 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gFbEJjx5JB 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.mBz6xsN41D 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:04.660 23:07:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.mBz6xsN41D 00:36:04.660 23:07:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.mBz6xsN41D 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.mBz6xsN41D 
00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@30 -- # tgtpid=2655485 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:36:04.660 23:07:12 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2655485 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2655485 ']' 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:04.660 23:07:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:04.661 23:07:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:04.661 23:07:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:04.661 [2024-12-10 23:07:12.346414] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:36:04.661 [2024-12-10 23:07:12.346463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655485 ] 00:36:04.920 [2024-12-10 23:07:12.419515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:04.920 [2024-12-10 23:07:12.472947] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:05.178 23:07:12 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:05.178 [2024-12-10 23:07:12.696907] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.178 null0 00:36:05.178 [2024-12-10 23:07:12.728944] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:05.178 [2024-12-10 23:07:12.729289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.178 23:07:12 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.178 23:07:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:05.178 [2024-12-10 23:07:12.761019] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:05.178 request: 00:36:05.178 { 00:36:05.178 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:05.179 "secure_channel": false, 00:36:05.179 "listen_address": { 00:36:05.179 "trtype": "tcp", 00:36:05.179 "traddr": "127.0.0.1", 00:36:05.179 "trsvcid": "4420" 00:36:05.179 }, 00:36:05.179 "method": "nvmf_subsystem_add_listener", 00:36:05.179 "req_id": 1 00:36:05.179 } 00:36:05.179 Got JSON-RPC error response 00:36:05.179 response: 00:36:05.179 { 00:36:05.179 "code": -32602, 00:36:05.179 "message": "Invalid parameters" 00:36:05.179 } 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:05.179 23:07:12 keyring_file -- keyring/file.sh@47 -- # bperfpid=2655498 00:36:05.179 23:07:12 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2655498 /var/tmp/bperf.sock 00:36:05.179 23:07:12 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:05.179 
23:07:12 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2655498 ']' 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:05.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.179 23:07:12 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:05.179 [2024-12-10 23:07:12.817859] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:36:05.179 [2024-12-10 23:07:12.817901] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2655498 ] 00:36:05.179 [2024-12-10 23:07:12.893901] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.438 [2024-12-10 23:07:12.936695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.438 23:07:13 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.438 23:07:13 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:05.438 23:07:13 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:05.438 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:05.697 23:07:13 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mBz6xsN41D 00:36:05.697 23:07:13 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mBz6xsN41D 00:36:05.697 23:07:13 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:05.697 23:07:13 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:05.697 23:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:05.697 23:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:05.697 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:05.956 23:07:13 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.gFbEJjx5JB == \/\t\m\p\/\t\m\p\.\g\F\b\E\J\j\x\5\J\B ]] 00:36:05.956 23:07:13 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:05.956 23:07:13 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:05.956 23:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:05.956 23:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:05.956 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.214 23:07:13 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.mBz6xsN41D == \/\t\m\p\/\t\m\p\.\m\B\z\6\x\s\N\4\1\D ]] 00:36:06.215 23:07:13 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:06.215 23:07:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.215 23:07:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.215 23:07:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.215 23:07:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.215 23:07:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:36:06.473 23:07:14 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:06.473 23:07:14 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:06.473 23:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:06.473 23:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.473 23:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.473 23:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:06.473 23:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.732 23:07:14 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:06.732 23:07:14 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:06.732 23:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:06.732 [2024-12-10 23:07:14.409216] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:06.990 nvme0n1 00:36:06.990 23:07:14 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:06.990 23:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:06.990 23:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.990 23:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.990 23:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:06.991 23:07:14 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:06.991 23:07:14 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:06.991 23:07:14 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:06.991 23:07:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:06.991 23:07:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:06.991 23:07:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:06.991 23:07:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:06.991 23:07:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:07.249 23:07:14 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:07.249 23:07:14 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:07.507 Running I/O for 1 seconds... 
00:36:08.442 18655.00 IOPS, 72.87 MiB/s 00:36:08.442 Latency(us) 00:36:08.442 [2024-12-10T22:07:16.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.442 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:08.442 nvme0n1 : 1.00 18702.27 73.06 0.00 0.00 6831.18 2806.65 16412.49 00:36:08.442 [2024-12-10T22:07:16.171Z] =================================================================================================================== 00:36:08.442 [2024-12-10T22:07:16.171Z] Total : 18702.27 73.06 0.00 0.00 6831.18 2806.65 16412.49 00:36:08.442 { 00:36:08.442 "results": [ 00:36:08.442 { 00:36:08.442 "job": "nvme0n1", 00:36:08.442 "core_mask": "0x2", 00:36:08.442 "workload": "randrw", 00:36:08.442 "percentage": 50, 00:36:08.442 "status": "finished", 00:36:08.442 "queue_depth": 128, 00:36:08.442 "io_size": 4096, 00:36:08.442 "runtime": 1.00437, 00:36:08.442 "iops": 18702.2710754005, 00:36:08.442 "mibps": 73.0557463882832, 00:36:08.442 "io_failed": 0, 00:36:08.442 "io_timeout": 0, 00:36:08.442 "avg_latency_us": 6831.177874231539, 00:36:08.442 "min_latency_us": 2806.6504347826085, 00:36:08.442 "max_latency_us": 16412.49391304348 00:36:08.442 } 00:36:08.442 ], 00:36:08.442 "core_count": 1 00:36:08.442 } 00:36:08.442 23:07:16 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:08.442 23:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:08.700 23:07:16 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:08.700 23:07:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:08.700 23:07:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:08.700 23:07:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:08.700 23:07:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:36:08.700 23:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:08.969 23:07:16 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:08.969 23:07:16 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:08.969 23:07:16 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:08.969 23:07:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:08.969 23:07:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:08.969 23:07:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:08.969 23:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:08.969 23:07:16 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:08.969 23:07:16 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:08.969 23:07:16 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:08.969 23:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:09.230 [2024-12-10 23:07:16.831480] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:09.230 [2024-12-10 23:07:16.832178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238ae90 (107): Transport endpoint is not connected 00:36:09.230 [2024-12-10 23:07:16.833173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x238ae90 (9): Bad file descriptor 00:36:09.230 [2024-12-10 23:07:16.834174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:09.230 [2024-12-10 23:07:16.834183] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:09.230 [2024-12-10 23:07:16.834190] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:09.231 [2024-12-10 23:07:16.834199] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:09.231 request: 00:36:09.231 { 00:36:09.231 "name": "nvme0", 00:36:09.231 "trtype": "tcp", 00:36:09.231 "traddr": "127.0.0.1", 00:36:09.231 "adrfam": "ipv4", 00:36:09.231 "trsvcid": "4420", 00:36:09.231 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:09.231 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:09.231 "prchk_reftag": false, 00:36:09.231 "prchk_guard": false, 00:36:09.231 "hdgst": false, 00:36:09.231 "ddgst": false, 00:36:09.231 "psk": "key1", 00:36:09.231 "allow_unrecognized_csi": false, 00:36:09.231 "method": "bdev_nvme_attach_controller", 00:36:09.231 "req_id": 1 00:36:09.231 } 00:36:09.231 Got JSON-RPC error response 00:36:09.231 response: 00:36:09.231 { 00:36:09.231 "code": -5, 00:36:09.231 "message": "Input/output error" 00:36:09.231 } 00:36:09.231 23:07:16 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:09.231 23:07:16 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:09.231 23:07:16 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:09.231 23:07:16 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:09.231 23:07:16 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:09.231 23:07:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:09.231 23:07:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:09.231 23:07:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.231 23:07:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:09.231 23:07:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.490 23:07:17 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:09.490 23:07:17 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:09.490 23:07:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:09.490 23:07:17 keyring_file -- keyring/common.sh@12 -- # jq -r 
.refcnt 00:36:09.490 23:07:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:09.490 23:07:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:09.490 23:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:09.748 23:07:17 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:09.748 23:07:17 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:09.748 23:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:09.748 23:07:17 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:09.748 23:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:10.008 23:07:17 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:10.008 23:07:17 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:10.008 23:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.267 23:07:17 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:10.267 23:07:17 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.gFbEJjx5JB 00:36:10.267 23:07:17 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:10.267 23:07:17 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:10.267 23:07:17 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:10.267 23:07:17 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:10.267 23:07:17 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.267 23:07:17 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:10.267 23:07:17 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.267 23:07:17 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:10.267 23:07:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:10.526 [2024-12-10 23:07:18.023326] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gFbEJjx5JB': 0100660 00:36:10.526 [2024-12-10 23:07:18.023350] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:10.526 request: 00:36:10.526 { 00:36:10.526 "name": "key0", 00:36:10.526 "path": "/tmp/tmp.gFbEJjx5JB", 00:36:10.526 "method": "keyring_file_add_key", 00:36:10.526 "req_id": 1 00:36:10.526 } 00:36:10.526 Got JSON-RPC error response 00:36:10.526 response: 00:36:10.526 { 00:36:10.526 "code": -1, 00:36:10.526 "message": "Operation not permitted" 00:36:10.526 } 00:36:10.526 23:07:18 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:36:10.526 23:07:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:10.526 23:07:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:10.526 23:07:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:10.526 23:07:18 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.gFbEJjx5JB 00:36:10.526 23:07:18 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:10.526 23:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gFbEJjx5JB 00:36:10.526 
23:07:18 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.gFbEJjx5JB 00:36:10.785 23:07:18 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:10.785 23:07:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:10.785 23:07:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:10.785 23:07:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:10.785 23:07:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:10.785 23:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:10.785 23:07:18 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:10.785 23:07:18 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:10.785 23:07:18 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:10.785 23:07:18 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.044 [2024-12-10 23:07:18.632941] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gFbEJjx5JB': No such file or directory 00:36:11.044 [2024-12-10 23:07:18.632959] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:11.044 [2024-12-10 23:07:18.632974] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:11.044 [2024-12-10 23:07:18.632981] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:11.044 [2024-12-10 23:07:18.633004] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:11.044 [2024-12-10 23:07:18.633010] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:11.044 request: 00:36:11.044 { 00:36:11.044 "name": "nvme0", 00:36:11.044 "trtype": "tcp", 00:36:11.044 "traddr": "127.0.0.1", 00:36:11.044 "adrfam": "ipv4", 00:36:11.045 "trsvcid": "4420", 00:36:11.045 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:11.045 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:11.045 "prchk_reftag": false, 00:36:11.045 "prchk_guard": false, 00:36:11.045 "hdgst": false, 00:36:11.045 "ddgst": false, 00:36:11.045 "psk": "key0", 00:36:11.045 "allow_unrecognized_csi": false, 00:36:11.045 "method": "bdev_nvme_attach_controller", 00:36:11.045 "req_id": 1 00:36:11.045 } 00:36:11.045 Got JSON-RPC error response 00:36:11.045 response: 00:36:11.045 { 00:36:11.045 "code": -19, 00:36:11.045 "message": "No such device" 00:36:11.045 } 00:36:11.045 23:07:18 keyring_file -- common/autotest_common.sh@655 
-- # es=1 00:36:11.045 23:07:18 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:11.045 23:07:18 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:11.045 23:07:18 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:11.045 23:07:18 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:11.045 23:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:11.304 23:07:18 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Kg7k9HnSvI 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:11.304 23:07:18 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:11.304 23:07:18 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:11.304 23:07:18 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:11.304 23:07:18 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:11.304 23:07:18 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:11.304 23:07:18 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Kg7k9HnSvI 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Kg7k9HnSvI 
00:36:11.304 23:07:18 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Kg7k9HnSvI 00:36:11.304 23:07:18 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kg7k9HnSvI 00:36:11.304 23:07:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Kg7k9HnSvI 00:36:11.563 23:07:19 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.563 23:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:11.822 nvme0n1 00:36:11.822 23:07:19 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:11.822 23:07:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:11.822 23:07:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:11.822 23:07:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:11.822 23:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:11.822 23:07:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.081 23:07:19 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:12.081 23:07:19 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:12.081 23:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:12.081 23:07:19 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:12.081 23:07:19 
keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:12.081 23:07:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.081 23:07:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.081 23:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.340 23:07:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:12.340 23:07:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:12.340 23:07:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:12.340 23:07:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:12.340 23:07:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:12.340 23:07:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:12.340 23:07:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:12.599 23:07:20 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:12.599 23:07:20 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:12.599 23:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:12.858 23:07:20 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:12.858 23:07:20 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:12.858 23:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:13.117 23:07:20 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:13.117 23:07:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Kg7k9HnSvI 
00:36:13.117 23:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Kg7k9HnSvI 00:36:13.117 23:07:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.mBz6xsN41D 00:36:13.117 23:07:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.mBz6xsN41D 00:36:13.376 23:07:21 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:13.376 23:07:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:13.635 nvme0n1 00:36:13.635 23:07:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:13.635 23:07:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:13.895 23:07:21 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:13.895 "subsystems": [ 00:36:13.895 { 00:36:13.895 "subsystem": "keyring", 00:36:13.895 "config": [ 00:36:13.895 { 00:36:13.895 "method": "keyring_file_add_key", 00:36:13.895 "params": { 00:36:13.895 "name": "key0", 00:36:13.895 "path": "/tmp/tmp.Kg7k9HnSvI" 00:36:13.895 } 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "method": "keyring_file_add_key", 00:36:13.895 "params": { 00:36:13.895 "name": "key1", 00:36:13.895 "path": "/tmp/tmp.mBz6xsN41D" 00:36:13.895 } 00:36:13.895 } 00:36:13.895 ] 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "subsystem": "iobuf", 00:36:13.895 "config": [ 00:36:13.895 { 00:36:13.895 
"method": "iobuf_set_options", 00:36:13.895 "params": { 00:36:13.895 "small_pool_count": 8192, 00:36:13.895 "large_pool_count": 1024, 00:36:13.895 "small_bufsize": 8192, 00:36:13.895 "large_bufsize": 135168, 00:36:13.895 "enable_numa": false 00:36:13.895 } 00:36:13.895 } 00:36:13.895 ] 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "subsystem": "sock", 00:36:13.895 "config": [ 00:36:13.895 { 00:36:13.895 "method": "sock_set_default_impl", 00:36:13.895 "params": { 00:36:13.895 "impl_name": "posix" 00:36:13.895 } 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "method": "sock_impl_set_options", 00:36:13.895 "params": { 00:36:13.895 "impl_name": "ssl", 00:36:13.895 "recv_buf_size": 4096, 00:36:13.895 "send_buf_size": 4096, 00:36:13.895 "enable_recv_pipe": true, 00:36:13.895 "enable_quickack": false, 00:36:13.895 "enable_placement_id": 0, 00:36:13.895 "enable_zerocopy_send_server": true, 00:36:13.895 "enable_zerocopy_send_client": false, 00:36:13.895 "zerocopy_threshold": 0, 00:36:13.895 "tls_version": 0, 00:36:13.895 "enable_ktls": false 00:36:13.895 } 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "method": "sock_impl_set_options", 00:36:13.895 "params": { 00:36:13.895 "impl_name": "posix", 00:36:13.895 "recv_buf_size": 2097152, 00:36:13.895 "send_buf_size": 2097152, 00:36:13.895 "enable_recv_pipe": true, 00:36:13.895 "enable_quickack": false, 00:36:13.895 "enable_placement_id": 0, 00:36:13.895 "enable_zerocopy_send_server": true, 00:36:13.895 "enable_zerocopy_send_client": false, 00:36:13.895 "zerocopy_threshold": 0, 00:36:13.895 "tls_version": 0, 00:36:13.895 "enable_ktls": false 00:36:13.895 } 00:36:13.895 } 00:36:13.895 ] 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "subsystem": "vmd", 00:36:13.895 "config": [] 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "subsystem": "accel", 00:36:13.895 "config": [ 00:36:13.895 { 00:36:13.895 "method": "accel_set_options", 00:36:13.895 "params": { 00:36:13.895 "small_cache_size": 128, 00:36:13.895 "large_cache_size": 16, 00:36:13.895 
"task_count": 2048, 00:36:13.895 "sequence_count": 2048, 00:36:13.895 "buf_count": 2048 00:36:13.895 } 00:36:13.895 } 00:36:13.895 ] 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "subsystem": "bdev", 00:36:13.895 "config": [ 00:36:13.895 { 00:36:13.895 "method": "bdev_set_options", 00:36:13.895 "params": { 00:36:13.895 "bdev_io_pool_size": 65535, 00:36:13.895 "bdev_io_cache_size": 256, 00:36:13.895 "bdev_auto_examine": true, 00:36:13.895 "iobuf_small_cache_size": 128, 00:36:13.895 "iobuf_large_cache_size": 16 00:36:13.895 } 00:36:13.895 }, 00:36:13.895 { 00:36:13.895 "method": "bdev_raid_set_options", 00:36:13.895 "params": { 00:36:13.895 "process_window_size_kb": 1024, 00:36:13.896 "process_max_bandwidth_mb_sec": 0 00:36:13.896 } 00:36:13.896 }, 00:36:13.896 { 00:36:13.896 "method": "bdev_iscsi_set_options", 00:36:13.896 "params": { 00:36:13.896 "timeout_sec": 30 00:36:13.896 } 00:36:13.896 }, 00:36:13.896 { 00:36:13.896 "method": "bdev_nvme_set_options", 00:36:13.896 "params": { 00:36:13.896 "action_on_timeout": "none", 00:36:13.896 "timeout_us": 0, 00:36:13.896 "timeout_admin_us": 0, 00:36:13.896 "keep_alive_timeout_ms": 10000, 00:36:13.896 "arbitration_burst": 0, 00:36:13.896 "low_priority_weight": 0, 00:36:13.896 "medium_priority_weight": 0, 00:36:13.896 "high_priority_weight": 0, 00:36:13.896 "nvme_adminq_poll_period_us": 10000, 00:36:13.896 "nvme_ioq_poll_period_us": 0, 00:36:13.896 "io_queue_requests": 512, 00:36:13.896 "delay_cmd_submit": true, 00:36:13.896 "transport_retry_count": 4, 00:36:13.896 "bdev_retry_count": 3, 00:36:13.896 "transport_ack_timeout": 0, 00:36:13.896 "ctrlr_loss_timeout_sec": 0, 00:36:13.896 "reconnect_delay_sec": 0, 00:36:13.896 "fast_io_fail_timeout_sec": 0, 00:36:13.896 "disable_auto_failback": false, 00:36:13.896 "generate_uuids": false, 00:36:13.896 "transport_tos": 0, 00:36:13.896 "nvme_error_stat": false, 00:36:13.896 "rdma_srq_size": 0, 00:36:13.896 "io_path_stat": false, 00:36:13.896 "allow_accel_sequence": false, 00:36:13.896 
"rdma_max_cq_size": 0, 00:36:13.896 "rdma_cm_event_timeout_ms": 0, 00:36:13.896 "dhchap_digests": [ 00:36:13.896 "sha256", 00:36:13.896 "sha384", 00:36:13.896 "sha512" 00:36:13.896 ], 00:36:13.896 "dhchap_dhgroups": [ 00:36:13.896 "null", 00:36:13.896 "ffdhe2048", 00:36:13.896 "ffdhe3072", 00:36:13.896 "ffdhe4096", 00:36:13.896 "ffdhe6144", 00:36:13.896 "ffdhe8192" 00:36:13.896 ], 00:36:13.896 "rdma_umr_per_io": false 00:36:13.896 } 00:36:13.896 }, 00:36:13.896 { 00:36:13.896 "method": "bdev_nvme_attach_controller", 00:36:13.896 "params": { 00:36:13.896 "name": "nvme0", 00:36:13.896 "trtype": "TCP", 00:36:13.896 "adrfam": "IPv4", 00:36:13.896 "traddr": "127.0.0.1", 00:36:13.896 "trsvcid": "4420", 00:36:13.896 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:13.896 "prchk_reftag": false, 00:36:13.896 "prchk_guard": false, 00:36:13.896 "ctrlr_loss_timeout_sec": 0, 00:36:13.896 "reconnect_delay_sec": 0, 00:36:13.896 "fast_io_fail_timeout_sec": 0, 00:36:13.896 "psk": "key0", 00:36:13.896 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:13.896 "hdgst": false, 00:36:13.896 "ddgst": false, 00:36:13.896 "multipath": "multipath" 00:36:13.896 } 00:36:13.896 }, 00:36:13.896 { 00:36:13.896 "method": "bdev_nvme_set_hotplug", 00:36:13.896 "params": { 00:36:13.896 "period_us": 100000, 00:36:13.896 "enable": false 00:36:13.896 } 00:36:13.896 }, 00:36:13.896 { 00:36:13.896 "method": "bdev_wait_for_examine" 00:36:13.896 } 00:36:13.896 ] 00:36:13.896 }, 00:36:13.896 { 00:36:13.896 "subsystem": "nbd", 00:36:13.896 "config": [] 00:36:13.896 } 00:36:13.896 ] 00:36:13.896 }' 00:36:13.896 23:07:21 keyring_file -- keyring/file.sh@115 -- # killprocess 2655498 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2655498 ']' 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2655498 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux 
= Linux ']' 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655498 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655498' 00:36:13.896 killing process with pid 2655498 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@973 -- # kill 2655498 00:36:13.896 Received shutdown signal, test time was about 1.000000 seconds 00:36:13.896 00:36:13.896 Latency(us) 00:36:13.896 [2024-12-10T22:07:21.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:13.896 [2024-12-10T22:07:21.625Z] =================================================================================================================== 00:36:13.896 [2024-12-10T22:07:21.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:13.896 23:07:21 keyring_file -- common/autotest_common.sh@978 -- # wait 2655498 00:36:14.156 23:07:21 keyring_file -- keyring/file.sh@118 -- # bperfpid=2657040 00:36:14.156 23:07:21 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2657040 /var/tmp/bperf.sock 00:36:14.156 23:07:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2657040 ']' 00:36:14.156 23:07:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:14.156 23:07:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:14.156 23:07:21 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:14.156 23:07:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:36:14.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:14.156 23:07:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:14.156 23:07:21 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:14.156 "subsystems": [ 00:36:14.156 { 00:36:14.156 "subsystem": "keyring", 00:36:14.156 "config": [ 00:36:14.156 { 00:36:14.156 "method": "keyring_file_add_key", 00:36:14.156 "params": { 00:36:14.156 "name": "key0", 00:36:14.156 "path": "/tmp/tmp.Kg7k9HnSvI" 00:36:14.156 } 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "method": "keyring_file_add_key", 00:36:14.156 "params": { 00:36:14.156 "name": "key1", 00:36:14.156 "path": "/tmp/tmp.mBz6xsN41D" 00:36:14.156 } 00:36:14.156 } 00:36:14.156 ] 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "subsystem": "iobuf", 00:36:14.156 "config": [ 00:36:14.156 { 00:36:14.156 "method": "iobuf_set_options", 00:36:14.156 "params": { 00:36:14.156 "small_pool_count": 8192, 00:36:14.156 "large_pool_count": 1024, 00:36:14.156 "small_bufsize": 8192, 00:36:14.156 "large_bufsize": 135168, 00:36:14.156 "enable_numa": false 00:36:14.156 } 00:36:14.156 } 00:36:14.156 ] 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "subsystem": "sock", 00:36:14.156 "config": [ 00:36:14.156 { 00:36:14.156 "method": "sock_set_default_impl", 00:36:14.156 "params": { 00:36:14.156 "impl_name": "posix" 00:36:14.156 } 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "method": "sock_impl_set_options", 00:36:14.156 "params": { 00:36:14.156 "impl_name": "ssl", 00:36:14.156 "recv_buf_size": 4096, 00:36:14.156 "send_buf_size": 4096, 00:36:14.156 "enable_recv_pipe": true, 00:36:14.156 "enable_quickack": false, 00:36:14.156 "enable_placement_id": 0, 00:36:14.156 "enable_zerocopy_send_server": true, 00:36:14.156 "enable_zerocopy_send_client": false, 00:36:14.156 "zerocopy_threshold": 0, 00:36:14.156 "tls_version": 0, 00:36:14.156 "enable_ktls": false 00:36:14.156 } 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "method": 
"sock_impl_set_options", 00:36:14.156 "params": { 00:36:14.156 "impl_name": "posix", 00:36:14.156 "recv_buf_size": 2097152, 00:36:14.156 "send_buf_size": 2097152, 00:36:14.156 "enable_recv_pipe": true, 00:36:14.156 "enable_quickack": false, 00:36:14.156 "enable_placement_id": 0, 00:36:14.156 "enable_zerocopy_send_server": true, 00:36:14.156 "enable_zerocopy_send_client": false, 00:36:14.156 "zerocopy_threshold": 0, 00:36:14.156 "tls_version": 0, 00:36:14.156 "enable_ktls": false 00:36:14.156 } 00:36:14.156 } 00:36:14.156 ] 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "subsystem": "vmd", 00:36:14.156 "config": [] 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "subsystem": "accel", 00:36:14.156 "config": [ 00:36:14.156 { 00:36:14.156 "method": "accel_set_options", 00:36:14.156 "params": { 00:36:14.156 "small_cache_size": 128, 00:36:14.156 "large_cache_size": 16, 00:36:14.156 "task_count": 2048, 00:36:14.156 "sequence_count": 2048, 00:36:14.156 "buf_count": 2048 00:36:14.156 } 00:36:14.156 } 00:36:14.156 ] 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "subsystem": "bdev", 00:36:14.156 "config": [ 00:36:14.156 { 00:36:14.156 "method": "bdev_set_options", 00:36:14.156 "params": { 00:36:14.156 "bdev_io_pool_size": 65535, 00:36:14.156 "bdev_io_cache_size": 256, 00:36:14.156 "bdev_auto_examine": true, 00:36:14.156 "iobuf_small_cache_size": 128, 00:36:14.156 "iobuf_large_cache_size": 16 00:36:14.156 } 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "method": "bdev_raid_set_options", 00:36:14.156 "params": { 00:36:14.156 "process_window_size_kb": 1024, 00:36:14.156 "process_max_bandwidth_mb_sec": 0 00:36:14.156 } 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "method": "bdev_iscsi_set_options", 00:36:14.156 "params": { 00:36:14.156 "timeout_sec": 30 00:36:14.156 } 00:36:14.156 }, 00:36:14.156 { 00:36:14.156 "method": "bdev_nvme_set_options", 00:36:14.156 "params": { 00:36:14.157 "action_on_timeout": "none", 00:36:14.157 "timeout_us": 0, 00:36:14.157 "timeout_admin_us": 0, 00:36:14.157 
"keep_alive_timeout_ms": 10000, 00:36:14.157 "arbitration_burst": 0, 00:36:14.157 "low_priority_weight": 0, 00:36:14.157 "medium_priority_weight": 0, 00:36:14.157 "high_priority_weight": 0, 00:36:14.157 "nvme_adminq_poll_period_us": 10000, 00:36:14.157 "nvme_ioq_poll_period_us": 0, 00:36:14.157 "io_queue_requests": 512, 00:36:14.157 "delay_cmd_submit": true, 00:36:14.157 "transport_retry_count": 4, 00:36:14.157 "bdev_retry_count": 3, 00:36:14.157 "transport_ack_timeout": 0, 00:36:14.157 "ctrlr_loss_timeout_sec": 0, 00:36:14.157 "reconnect_delay_sec": 0, 00:36:14.157 "fast_io_fail_timeout_sec": 0, 00:36:14.157 "disable_auto_failback": false, 00:36:14.157 "generate_uuids": false, 00:36:14.157 "transport_tos": 0, 00:36:14.157 "nvme_error_stat": false, 00:36:14.157 "rdma_srq_size": 0, 00:36:14.157 "io_path_stat": false, 00:36:14.157 "allow_accel_sequence": false, 00:36:14.157 "rdma_max_cq_size": 0, 00:36:14.157 "rdma_cm_event_timeout_ms": 0, 00:36:14.157 "dhchap_digests": [ 00:36:14.157 "sha256", 00:36:14.157 "sha384", 00:36:14.157 "sha512" 00:36:14.157 ], 00:36:14.157 "dhchap_dhgroups": [ 00:36:14.157 "null", 00:36:14.157 "ffdhe2048", 00:36:14.157 "ffdhe3072", 00:36:14.157 "ffdhe4096", 00:36:14.157 "ffdhe6144", 00:36:14.157 "ffdhe8192" 00:36:14.157 ], 00:36:14.157 "rdma_umr_per_io": false 00:36:14.157 } 00:36:14.157 }, 00:36:14.157 { 00:36:14.157 "method": "bdev_nvme_attach_controller", 00:36:14.157 "params": { 00:36:14.157 "name": "nvme0", 00:36:14.157 "trtype": "TCP", 00:36:14.157 "adrfam": "IPv4", 00:36:14.157 "traddr": "127.0.0.1", 00:36:14.157 "trsvcid": "4420", 00:36:14.157 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:14.157 "prchk_reftag": false, 00:36:14.157 "prchk_guard": false, 00:36:14.157 "ctrlr_loss_timeout_sec": 0, 00:36:14.157 "reconnect_delay_sec": 0, 00:36:14.157 "fast_io_fail_timeout_sec": 0, 00:36:14.157 "psk": "key0", 00:36:14.157 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:14.157 "hdgst": false, 00:36:14.157 "ddgst": false, 00:36:14.157 
"multipath": "multipath" 00:36:14.157 } 00:36:14.157 }, 00:36:14.157 { 00:36:14.157 "method": "bdev_nvme_set_hotplug", 00:36:14.157 "params": { 00:36:14.157 "period_us": 100000, 00:36:14.157 "enable": false 00:36:14.157 } 00:36:14.157 }, 00:36:14.157 { 00:36:14.157 "method": "bdev_wait_for_examine" 00:36:14.157 } 00:36:14.157 ] 00:36:14.157 }, 00:36:14.157 { 00:36:14.157 "subsystem": "nbd", 00:36:14.157 "config": [] 00:36:14.157 } 00:36:14.157 ] 00:36:14.157 }' 00:36:14.157 23:07:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:14.157 [2024-12-10 23:07:21.800456] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 00:36:14.157 [2024-12-10 23:07:21.800507] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657040 ] 00:36:14.157 [2024-12-10 23:07:21.875517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:14.416 [2024-12-10 23:07:21.917526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:14.416 [2024-12-10 23:07:22.078963] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:14.984 23:07:22 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:14.984 23:07:22 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:36:14.984 23:07:22 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:14.984 23:07:22 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:14.984 23:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.242 23:07:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:15.242 23:07:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:15.242 23:07:22 
keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:15.242 23:07:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.242 23:07:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:15.242 23:07:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.242 23:07:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.500 23:07:23 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:15.500 23:07:23 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:15.500 23:07:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:15.500 23:07:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:15.500 23:07:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:15.500 23:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:15.500 23:07:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:15.758 23:07:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Kg7k9HnSvI /tmp/tmp.mBz6xsN41D 00:36:15.758 23:07:23 keyring_file -- keyring/file.sh@20 -- # killprocess 2657040 00:36:15.758 23:07:23 keyring_file -- 
common/autotest_common.sh@954 -- # '[' -z 2657040 ']' 00:36:15.758 23:07:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2657040 00:36:15.758 23:07:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:15.758 23:07:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:15.758 23:07:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2657040 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2657040' 00:36:16.017 killing process with pid 2657040 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@973 -- # kill 2657040 00:36:16.017 Received shutdown signal, test time was about 1.000000 seconds 00:36:16.017 00:36:16.017 Latency(us) 00:36:16.017 [2024-12-10T22:07:23.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:16.017 [2024-12-10T22:07:23.746Z] =================================================================================================================== 00:36:16.017 [2024-12-10T22:07:23.746Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@978 -- # wait 2657040 00:36:16.017 23:07:23 keyring_file -- keyring/file.sh@21 -- # killprocess 2655485 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2655485 ']' 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2655485 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@959 -- # uname 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2655485 00:36:16.017 23:07:23 
keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2655485' 00:36:16.017 killing process with pid 2655485 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@973 -- # kill 2655485 00:36:16.017 23:07:23 keyring_file -- common/autotest_common.sh@978 -- # wait 2655485 00:36:16.586 00:36:16.586 real 0m12.041s 00:36:16.586 user 0m29.970s 00:36:16.586 sys 0m2.790s 00:36:16.586 23:07:24 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:16.586 23:07:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:16.586 ************************************ 00:36:16.586 END TEST keyring_file 00:36:16.586 ************************************ 00:36:16.586 23:07:24 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:36:16.586 23:07:24 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh 00:36:16.586 23:07:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:16.586 23:07:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:16.586 23:07:24 -- common/autotest_common.sh@10 -- # set +x 00:36:16.586 ************************************ 00:36:16.586 START TEST keyring_linux 00:36:16.586 ************************************ 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/linux.sh 00:36:16.586 Joined session keyring: 612643162 00:36:16.586 * Looking for test storage... 
00:36:16.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 
00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 23:07:24 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:16.586 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:16.586 --rc genhtml_branch_coverage=1 00:36:16.586 --rc genhtml_function_coverage=1 00:36:16.586 --rc genhtml_legend=1 00:36:16.586 --rc geninfo_all_blocks=1 00:36:16.586 --rc geninfo_unexecuted_blocks=1 00:36:16.586 00:36:16.586 ' 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/keyring/common.sh 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@17 -- # nvme 
gen-hostnqn 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/common.sh 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:16.586 23:07:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:16.586 23:07:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 23:07:24 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 23:07:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 23:07:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:16.586 23:07:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:36:16.586 /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:16.586 23:07:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:16.586 23:07:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:16.586 23:07:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:16.846 /tmp/:spdk-test:key0 00:36:16.846 23:07:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:16.846 23:07:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:16.846 23:07:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:16.846 23:07:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:16.846 23:07:24 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:16.846 23:07:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:16.846 23:07:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:16.846 23:07:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:16.846 /tmp/:spdk-test:key1 00:36:16.846 23:07:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2657566 00:36:16.846 23:07:24 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2657566 00:36:16.846 23:07:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/bin/spdk_tgt 00:36:16.846 23:07:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2657566 ']' 00:36:16.846 23:07:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:16.846 23:07:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:16.846 23:07:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:16.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:16.846 23:07:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:16.846 23:07:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:16.846 [2024-12-10 23:07:24.450142] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:36:16.846 [2024-12-10 23:07:24.450213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657566 ] 00:36:16.846 [2024-12-10 23:07:24.523552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:16.846 [2024-12-10 23:07:24.564650] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:17.105 23:07:24 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.105 23:07:24 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:17.105 23:07:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:17.105 23:07:24 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:17.105 23:07:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:17.105 [2024-12-10 23:07:24.796859] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:17.105 null0 00:36:17.105 [2024-12-10 23:07:24.828913] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:17.105 [2024-12-10 23:07:24.829263] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:17.364 23:07:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:17.364 1056841930 00:36:17.364 23:07:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:17.364 386212990 00:36:17.364 23:07:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2657694 00:36:17.364 23:07:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2657694 /var/tmp/bperf.sock 00:36:17.364 23:07:24 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2657694 ']' 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:17.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:17.364 23:07:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:17.364 [2024-12-10 23:07:24.902734] Starting SPDK v25.01-pre git sha1 66289a6db / DPDK 24.03.0 initialization... 
00:36:17.364 [2024-12-10 23:07:24.902777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2657694 ] 00:36:17.364 [2024-12-10 23:07:24.978154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:17.364 [2024-12-10 23:07:25.019869] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:17.364 23:07:25 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:17.364 23:07:25 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:36:17.364 23:07:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:17.364 23:07:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:17.623 23:07:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:17.623 23:07:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:17.882 23:07:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:17.882 23:07:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:18.140 [2024-12-10 23:07:25.689648] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:18.140 nvme0n1 00:36:18.140 23:07:25 keyring_linux -- 
keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:18.140 23:07:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:18.140 23:07:25 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:18.140 23:07:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:18.140 23:07:25 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:18.140 23:07:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.399 23:07:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:18.399 23:07:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:18.399 23:07:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:18.399 23:07:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:18.399 23:07:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:18.399 23:07:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:18.399 23:07:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@25 -- # sn=1056841930 00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 1056841930 == \1\0\5\6\8\4\1\9\3\0 ]] 00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1056841930 00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 
00:36:18.658 23:07:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:18.658 Running I/O for 1 seconds... 00:36:19.595 20980.00 IOPS, 81.95 MiB/s 00:36:19.595 Latency(us) 00:36:19.595 [2024-12-10T22:07:27.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:19.595 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:19.595 nvme0n1 : 1.01 20981.17 81.96 0.00 0.00 6080.21 5214.39 11055.64 00:36:19.595 [2024-12-10T22:07:27.324Z] =================================================================================================================== 00:36:19.595 [2024-12-10T22:07:27.324Z] Total : 20981.17 81.96 0.00 0.00 6080.21 5214.39 11055.64 00:36:19.595 { 00:36:19.595 "results": [ 00:36:19.595 { 00:36:19.595 "job": "nvme0n1", 00:36:19.595 "core_mask": "0x2", 00:36:19.595 "workload": "randread", 00:36:19.595 "status": "finished", 00:36:19.595 "queue_depth": 128, 00:36:19.595 "io_size": 4096, 00:36:19.595 "runtime": 1.006045, 00:36:19.595 "iops": 20981.168834396074, 00:36:19.595 "mibps": 81.95769075935966, 00:36:19.595 "io_failed": 0, 00:36:19.595 "io_timeout": 0, 00:36:19.595 "avg_latency_us": 6080.214699063203, 00:36:19.595 "min_latency_us": 5214.3860869565215, 00:36:19.595 "max_latency_us": 11055.638260869566 00:36:19.595 } 00:36:19.595 ], 00:36:19.595 "core_count": 1 00:36:19.595 } 00:36:19.595 23:07:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:19.595 23:07:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:19.854 23:07:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:19.854 23:07:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:19.854 23:07:27 keyring_linux -- keyring/linux.sh@20 
-- # local sn 00:36:19.854 23:07:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:19.854 23:07:27 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:19.854 23:07:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:20.114 23:07:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:20.114 23:07:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:20.114 23:07:27 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:20.114 23:07:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:20.114 23:07:27 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:20.114 23:07:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 
-f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:20.374 [2024-12-10 23:07:27.908923] /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:20.374 [2024-12-10 23:07:27.909636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470c20 (107): Transport endpoint is not connected 00:36:20.374 [2024-12-10 23:07:27.910631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2470c20 (9): Bad file descriptor 00:36:20.374 [2024-12-10 23:07:27.911631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:20.374 [2024-12-10 23:07:27.911642] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:20.374 [2024-12-10 23:07:27.911649] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:20.374 [2024-12-10 23:07:27.911657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:20.374 request: 00:36:20.374 { 00:36:20.374 "name": "nvme0", 00:36:20.374 "trtype": "tcp", 00:36:20.374 "traddr": "127.0.0.1", 00:36:20.374 "adrfam": "ipv4", 00:36:20.374 "trsvcid": "4420", 00:36:20.374 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:20.374 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:20.374 "prchk_reftag": false, 00:36:20.374 "prchk_guard": false, 00:36:20.374 "hdgst": false, 00:36:20.374 "ddgst": false, 00:36:20.374 "psk": ":spdk-test:key1", 00:36:20.374 "allow_unrecognized_csi": false, 00:36:20.374 "method": "bdev_nvme_attach_controller", 00:36:20.374 "req_id": 1 00:36:20.374 } 00:36:20.374 Got JSON-RPC error response 00:36:20.374 response: 00:36:20.374 { 00:36:20.374 "code": -5, 00:36:20.374 "message": "Input/output error" 00:36:20.374 } 00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@33 -- # sn=1056841930 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1056841930 00:36:20.374 1 links removed 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 
00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@33 -- # sn=386212990
00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 386212990
00:36:20.374 1 links removed
00:36:20.374 23:07:27 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2657694
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2657694 ']'
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2657694
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2657694
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:20.374 23:07:27 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2657694'
00:36:20.374 killing process with pid 2657694
00:36:20.374 23:07:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 2657694
00:36:20.374 Received shutdown signal, test time was about 1.000000 seconds
00:36:20.374
00:36:20.374 Latency(us)
00:36:20.374 [2024-12-10T22:07:28.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:20.374 [2024-12-10T22:07:28.103Z] ===================================================================================================================
00:36:20.374 [2024-12-10T22:07:28.103Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:20.374 23:07:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 2657694
00:36:20.633 23:07:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2657566
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2657566 ']'
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2657566
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2657566
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2657566'
00:36:20.633 killing process with pid 2657566
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@973 -- # kill 2657566
00:36:20.633 23:07:28 keyring_linux -- common/autotest_common.sh@978 -- # wait 2657566
00:36:20.892
00:36:20.892 real 0m4.406s
00:36:20.892 user 0m8.340s
00:36:20.892 sys 0m1.454s
00:36:20.892 23:07:28 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:20.892 23:07:28 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:20.892 ************************************
00:36:20.892 END TEST keyring_linux
00:36:20.892 ************************************
00:36:20.892 23:07:28 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:20.892 23:07:28 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:20.892 23:07:28 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:20.892 23:07:28 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:20.892 23:07:28 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:20.892 23:07:28 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:20.892 23:07:28 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:20.892 23:07:28 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:20.892 23:07:28 -- common/autotest_common.sh@10 -- # set +x
00:36:20.892 23:07:28 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:20.892 23:07:28 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:20.892 23:07:28 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:20.892 23:07:28 -- common/autotest_common.sh@10 -- # set +x
00:36:26.270 INFO: APP EXITING
00:36:26.270 INFO: killing all VMs
00:36:26.270 INFO: killing vhost app
00:36:26.270 INFO: EXIT DONE
00:36:28.807 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:36:28.807 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:36:28.807 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:36:29.066 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:36:32.358 Cleaning
00:36:32.359 Removing: /var/run/dpdk/spdk0/config
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:36:32.359 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:32.359 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:32.359 Removing: /var/run/dpdk/spdk1/config
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:36:32.359 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:36:32.359 Removing: /var/run/dpdk/spdk1/hugepage_info
00:36:32.359 Removing: /var/run/dpdk/spdk2/config
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:36:32.359 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:36:32.359 Removing: /var/run/dpdk/spdk2/hugepage_info
00:36:32.359 Removing: /var/run/dpdk/spdk3/config
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:36:32.359 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:36:32.359 Removing: /var/run/dpdk/spdk3/hugepage_info
00:36:32.359 Removing: /var/run/dpdk/spdk4/config
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:36:32.359 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:36:32.359 Removing: /var/run/dpdk/spdk4/hugepage_info
00:36:32.359 Removing: /dev/shm/bdev_svc_trace.1
00:36:32.359 Removing: /dev/shm/nvmf_trace.0
00:36:32.359 Removing: /dev/shm/spdk_tgt_trace.pid2177574
00:36:32.359 Removing: /var/run/dpdk/spdk0
00:36:32.359 Removing: /var/run/dpdk/spdk1
00:36:32.359 Removing: /var/run/dpdk/spdk2
00:36:32.359 Removing: /var/run/dpdk/spdk3
00:36:32.359 Removing: /var/run/dpdk/spdk4
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2175424
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2176489
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2177574
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2178209
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2179154
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2179245
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2180279
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2180372
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2180724
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2182361
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2184071
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2184515
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2184697
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2184927
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2185222
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2185473
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2185723
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2186006
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2186744
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2189751
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2190008
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2190265
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2190310
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2190768
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2190977
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2191323
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2191483
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2191751
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2191756
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2192014
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2192028
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2192587
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2192834
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2193140
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2196838
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2201329
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2211516
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2212077
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2216351
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2216792
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2221088
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2227103
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2230178
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2240441
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2249444
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2251154
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2252124
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2269086
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2273154
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2318500
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2323855
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2330174
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2336849
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2336855
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2337726
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2338474
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2339378
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2340018
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2340069
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2340302
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2340317
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2340443
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2341237
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2342145
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2343065
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2343549
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2343731
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2343981
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2345005
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2345986
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2354291
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2383680
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2388251
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2390022
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2391702
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2391875
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2392109
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2392129
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2392636
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2394470
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2395261
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2395731
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2397950
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2398352
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2399163
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2403826
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2409231
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2409232
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2409233
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2413087
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2421576
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2425617
00:36:32.359 Removing: /var/run/dpdk/spdk_pid2431831
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2433128
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2434472
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2436004
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2440512
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2444827
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2448837
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2456862
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2456957
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2461455
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2461682
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2461912
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2462371
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2462376
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2467033
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2467525
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2471989
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2474529
00:36:32.360 Removing: /var/run/dpdk/spdk_pid2479918
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2485255
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2494040
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2501386
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2501412
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2520323
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2520812
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2521491
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2521965
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2522707
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2523179
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2523720
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2524341
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2528604
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2528839
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2534908
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2534964
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2540435
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2544677
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2555026
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2555620
00:36:32.619 Removing: /var/run/dpdk/spdk_pid2559865
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2560109
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2564244
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2569989
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2572572
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2582592
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2591407
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2593132
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2594149
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2610507
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2614394
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2617079
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2624818
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2624823
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2630021
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2631884
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2633788
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2635048
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2637018
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2638208
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2647335
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2647961
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2648473
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2650737
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2651204
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2651667
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2655485
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2655498
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2657040
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2657566
00:36:32.620 Removing: /var/run/dpdk/spdk_pid2657694
00:36:32.620 Clean
00:36:32.879 23:07:40 -- common/autotest_common.sh@1453 -- # return 0
00:36:32.879 23:07:40 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:32.879 23:07:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:32.879 23:07:40 -- common/autotest_common.sh@10 -- # set +x
00:36:32.879 23:07:40 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:32.879 23:07:40 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:32.879 23:07:40 -- common/autotest_common.sh@10 -- # set +x
00:36:32.879 23:07:40 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt
00:36:32.879 23:07:40 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log ]]
00:36:32.879 23:07:40 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/udev.log
00:36:32.879 23:07:40 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:32.879 23:07:40 -- spdk/autotest.sh@398 -- # hostname
00:36:32.879 23:07:40 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info
00:36:32.879 geninfo: WARNING: invalid characters removed from testname!
00:36:54.814 23:08:01 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info
00:36:56.719 23:08:04 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info
00:36:58.620 23:08:06 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info
00:37:00.524 23:08:08 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info
00:37:03.059 23:08:10 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info
00:37:04.436 23:08:12 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/cov_total.info
00:37:06.341 23:08:13 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:37:06.341 23:08:13 -- spdk/autorun.sh@1 -- $ timing_finish
00:37:06.341 23:08:13 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt ]]
00:37:06.341 23:08:13 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:37:06.341 23:08:13 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:37:06.341 23:08:13 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk/../output/timing.txt
00:37:06.341 + [[ -n 2097272 ]]
00:37:06.341 + sudo kill 2097272
00:37:06.351 [Pipeline] }
00:37:06.366 [Pipeline] // stage
00:37:06.371 [Pipeline] }
00:37:06.385 [Pipeline] // timeout
00:37:06.390 [Pipeline] }
00:37:06.404 [Pipeline] // catchError
00:37:06.409 [Pipeline] }
00:37:06.424 [Pipeline] // wrap
00:37:06.430 [Pipeline] }
00:37:06.443 [Pipeline] // catchError
00:37:06.452 [Pipeline] stage
00:37:06.454 [Pipeline] { (Epilogue)
00:37:06.466 [Pipeline] catchError
00:37:06.468 [Pipeline] {
00:37:06.480 [Pipeline] echo
00:37:06.482 Cleanup processes
00:37:06.487 [Pipeline] sh
00:37:06.773 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:37:06.773 2668259 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:37:06.786 [Pipeline] sh
00:37:07.071 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest_2/spdk
00:37:07.071 ++ grep -v 'sudo pgrep'
00:37:07.071 ++ awk '{print $1}'
00:37:07.071 + sudo kill -9
00:37:07.071 + true
00:37:07.083 [Pipeline] sh
00:37:07.369 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:37:19.631 [Pipeline] sh
00:37:19.917 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:37:19.917 Artifacts sizes are good
00:37:19.930 [Pipeline] archiveArtifacts
00:37:19.937 Archiving artifacts
00:37:20.064 [Pipeline] sh
00:37:20.348 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest_2
00:37:20.362 [Pipeline] cleanWs
00:37:20.371 [WS-CLEANUP] Deleting project workspace...
00:37:20.371 [WS-CLEANUP] Deferred wipeout is used...
00:37:20.378 [WS-CLEANUP] done
00:37:20.380 [Pipeline] }
00:37:20.397 [Pipeline] // catchError
00:37:20.408 [Pipeline] sh
00:37:20.691 + logger -p user.info -t JENKINS-CI
00:37:20.699 [Pipeline] }
00:37:20.708 [Pipeline] // stage
00:37:20.712 [Pipeline] }
00:37:20.721 [Pipeline] // node
00:37:20.726 [Pipeline] End of Pipeline
00:37:20.751 Finished: SUCCESS